[Binary tar archive — contents not renderable as text]

Archive layout (from ustar headers):
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz

The single file member, kubelet.log.gz, is a gzip-compressed kubelet log captured in a Zuul CI job's output directory. The compressed payload is binary data and carries no recoverable text; extract the archive and decompress the member (e.g. tar -xf archive.tar && gunzip var/home/core/zuul-output/logs/kubelet.log.gz) to read the log.
,-{qk/sZmo Ԡ@߃r45>erSw/]3s6ј—%B,Oq\{H Wjk&Vkz "K˚nNտTh{;cU?y<^{&YyB3/$K1 @/,"WK5WE4b" .ٜ 봹/] c`MJNˀk4!^;X 0J*9`i$|Rܗ;|LtmtΪs(ITNC3eUO7Qm ERd[ rKo$A0uJjۧ)XoWz 2D'Rȣ&mPSR$*-z/S%X_aۤjnp?XvMu<>t dKP&^g\v<?j!,%@A[auI0 1DŽcG ߁2ӣ0ZS1GD5{ ]=\;ʈ&R[m"yÜS|,OeKms.Ar9E?yep`=o iR^89t㮆s0 >ڗAkO355;Q|O?2UE P]JH[\4˔k\e&o(DOvK4Y/2GHOcB@GPT`&IqZslQ1%A$rƙ 5B{4aI0(,A=,I#I@պzwg H=;i{?>0$g\B'hքdӑX6re9BXTK^QO& z">ƏBϼ :MT* -P3f'?i̩֕s!u[hUD I"`mT(wF뫃ll'UW3x'uvCu|ҠFp$ 4$"4A F06 CLZ&NIY#ЀmEDpl"a>DžnNF*XV:nJR-8J\E=0="'ݽC GcuOnOE;:jX;G 4ym_6&{|3iP#u$!.c_2.cX2.{zwX]X2Re,u?2RZ9Zge.?vtC+Ky-@r'g ,Xnuu6gյ~ge[^ zS!n9FRbr^cD=y)HFjmQjTR푷 'Q@#gΜ.mrPgK00i5Gmil]xwW9|)秔?-EoDzlVA,Br:zBQ$ ݭor6%Dl4&bV':J a!$:Û!Y\M\4IkS04T8MFˬZ\'1c+P( Bah6EY77#W!N;Hs:P$4FS@MB!Ѽ0v&*T- (ԭ"=&0B+a7XD:8U YEp Ms<Ti)JׂU&)~_sF`sU-oMg FMUT7Qv݅ݭ$Ru;-տFFj<ZYi-S(VWp#zFAZRb(z @~.'6_J{^Q~լ9w=0΃[Lss3Ǡ.`V)$wiu.\Fyi&.}Yi ! NP_ {Q3J[]sOl~nX¤oO9B?0[<z#M'|[Dr|=32W#aPNل]h95 \Y3C#w}C[/>[kAw.#p!V݉yʼn Qw Z|?]fr—%B,Oq\[N<簼Eq}E5(ku;ڙ+vv~)"7j?8r Dg^Hb@d6^X D$ߗJ k$k)4b" .ٜ iu12N5)9-BZtBPx4c9.(ޫIOKHIL&Měx%u*,#TuTq(/9/]髯Yz s>.3b?o2-meH zZ>罥K$'y!K:絖a:o;#Q:.0ҡE:,F7zOr%_6=6(zئlōWT7TUͺi \0q,Cde^y҉`ΖpyVg@+5ʴGy׏yГ <-i1ˍ pSyv_UFяV7c<7xVIh5{>5 %?.q•l[_8t,LJ )G)CXW3Ӿw{))wmI %Xp{vsK gS&R ݯzfH(>FdS8~TWU*S50OѥLIm'I8'i&s(%ӼB[`4G"`"RS zhʣoLD:36Qָr2{69k\Lf]-fݜ(t߫|[iU&|tw(5`aQ`9 LPP G뤧 G05rx6 gt<;y-`&PfT3:ڀmT ԀPErg$6g9NoS⦘ų+_O?ݰn6ElW3’ l oGxc%Վvu;{2E1_-8w$=seDHp?tMfv>YXZ gtc*> OSv`DEX륃+5kj J3 t[vC:;|oDZZ9ڙcULFyVN?38`AGaJ-*4k8ɽ3mg,`䚛)pbb (Lr)M*j d)qn+L^A3|QB>{;-FT叭+'GXZo_PL!wk~[%+"|`FhI>cUEm8Ig"L|fu&) Q5r!,ȍ$pdXXHnlPjZ0#wzf=ےm~:HX{j7Nxd>ZR ; )bdF/cLr(參tݻu}\RU5]nPpy8]:sU42a8'-$* %^FHHLBDN:`|@ʾ" \H3 r6$oXG0y䐰^z{E2FHH5pFBb%#1 L)DXҠ8KFbR2KZ/iS^A(2&xv|;>y8gZvxL_i\R}yX_s닶xuD~]=X_/g8 ΦA`nEAua*9kKl90e{ersr%T8 ZHdNa(Q"! 
%K9 Q Ƒr^@Vs9qa0[aXn}ohAC7(%glزnY/#%sd eKxf≅bkX&*^0m`鮧Wc'9lg>Lmdi xД0)ix+xU -×7@?sh_Q~̥N9lM>&4<򔚸yu*!bi݄1j6i ݤqh:wh ZuDtң}WC/o;P^o''|_\>עE˽{y!_)a$q9LJVl:y#<`EL8W-}Dc cɏiigCWPi4[{%Ajr51 &4"9ۅF"i-o{8T2TJmc}J!"'^qB6x=&=jSNk{Unz;%[p!dPzx`HTRh-`.A$ Eedl\r9dXv86 ݁^^9X4U]~WZp0%P翎*{%E*E>mЭ+H3oKA#./! (`o~ ~ Ue敥iGϓXML]fv0YtۆmUBYJ֒`RcJm5Pmhv;k3FWїNrWR2`rt ]Znu H]ɹ`1Xy= c ts+fs5;pNw ^nd*{ŎMm:ݎ^1]B?0߆iwaV0#*)i|,Sdaӿ5q2ОqKvIQ=:x|]:sXf6HWPQTU6;دȹlq~ȚO?9M$TUV1,Ra(υaw6\'ts:F[>m6U5]pٻd&f)J~#0o:ܵ{  N83o"kBv"H=Beҳk׻]bauj*J%(k r\Y`{i綱U:Jj^.]Rɒqתd8GRz)^% pA^^qC)Z^*j>oQ.ޭj2]]ߵ&Z|IIX%M1_M}0~8;ĸ@IЍZ U/lKw JN8Y/ ]`i l~=W s1Spk"Ͼ=ok6I'ZS.a\}3LSa5HѸ @7`2wTAIUe$-%Ng!;Kͳ{.wJxs/U=`o߶7.FهP_NR8h]x{#jId`Ǣ9%FH BJ5R$(#֕/:H\%̊"[T*iiiq`,.Fr*J]NJGND6,-3yk&id(R)}-croW&)%])s8X vZ0D'iYh,O' 퓅BQ}>Yh/)}>a,O' 퓅' 퓅bw Rgw7e:/6>)[6\^rÜ@-EKf-Dqrvgl0)iL+@Fh#SHv`bȚ9&`YG9248&w9lVJe?D h$]٦NjLdO̴qI[TT= _NEFzVg!Cg>Z-l}SX"ϛhGimPT̢K*" @R,O] *z+3= !z^3%h$YDQVhBCF(%S! 2",DDꥦX#&Z^Iwh#11t::ܦָ2+'d:m&pbKh{BC/Mx]+f)h,` b,$9A'!hThS.@5Fጒ ИG"h@QAshQ9b0XNPB2C#rMgY@Sr^<{U;wЮ)vEoY>e1frǹekA.[Fni2 ,ůd9>&6 " pXZLSZO"&%j6lds3S_W5u";R Xmߑ)o F-^,JJQUnn37DzOrMZWdE*H>>|C_ v~,?gQ%05!$4n*f`}=":5/?%"*nZt 2]jyxu?+&n`N`NK _{=9Y<[v 1c5k4h1?sz^\U+R9+ @ʒ !) rVb2HϹV>5,=ѵXdz_g[g/ɏI?z~g :?|/j%N̊vXȾOdR F.LܞB9T䈙E 5t@16SGE#P@4dk )"qr@O ˜Θ)  j%r6AAíDPVzU,ݼk`lG!!!H{;(*vl|7ֹg-DRQ? 
/n_cq 8 N߃_G(čf?X> EM?L`RZK_]ePZ~i¼u̪SGr7̅3u[q.<=?nn7ܪ,MLIY,TFQ*d$^Jk)J>)d .mpsUh=h"v4#'5|ߺI [ــy:`nx ;p5<1(^#}ǘey0}W SLSzIM\Lm W0=_!Im;lMMVOXqn>Vq?1o}uqG>Ӆ<lh[-rxv\r4<Iw~Ē2|i1#cџC<[vRc]6Q4u(_ ׻JEI{poْ>"JDp Ё2 N>`#XCn5>p6L3Bi<ڄDlJVҒ\JFi%*^Bz B 'AF}Ld%(ppE֠d@dJFֻ2ΪmgO`>"%Lgk)0L"D(Sp3Hos$z hhk>5ԄKCOZm`Q( *:$112o)~z넖kCejUZq:j~R=x|-qKId4ˬLKF ȥN(#8(zxWM&|x*wFgiVUct4A(\+:J@QW+I^1GQK2vVC;IȡhwgKZKi&{"'%dВMNl"w ',nbpNCPfbմ0MiFr<@wd | 4㖜C9n$%$iw#=xJT<%~@As1,N-C>ŋvbB\mQeg,t?JB8(q 2T~<ҺbO> 럦9f|c׭ݛowDo۬u9{^W;<_ Ϟ}t:@f7 .bFԣQQ*a;^i }PJaJz)|g8~BSR-tx X0G9#CNҨX`D460CL')mRuHW)N'xWz1\BbER9?4aMė?un3>>J$!3dE-q*0@>A:sh3Xn9{l8%|]+ck,vos"FZwoHɰ!%meD3h!IUvȺ$CNF:֞k<>#Sԩm!\$UcV\{>фp|i#B|U9Ԛ )5V1u(2]V|QJG"@"KhC\#V4t/-G &]&}|;8Ss);'jL><Ԍ! ȭR0IIR:.;:wNzՕk2hSkVmGb%C75B7u=E4/:4)ҋ6 zE v2hK1'ly4I:Qc݊*Jw'A="%Y|%9؃@ĕ6B;K:cd^^QUk;{^9 pmYuOYE1m :0($0͢>x=biǘR»P2͜0ʁ 戒З:(Y.DeiR$7vfQYO@ -eJn N|H,΂FHH"ˡɼ A,z)`|L "qotĺm<@8.3S(ڨsb6& X?Jf}~AJX&%u="2(wq v=6ja~Noʑ =7t1~}f}ݿskvN}/?N/v6Nޮ[u3Da>govbZ݄@~{G)r> (I93^.#J!GiV|"pK %W3!y1,*gr#Go_b___BRrH~&ÝQ2Ru-|k?L?O?1"Ze%}%omO"R⛓rDu%g\2}FJ.?0a2%#\7o/F1i]>2#s&qapv3b~ЅE%p<텂O˛OJb{sS%AI)rt:L'FXPvo(qdB"U"gHL\H#CoI-CK8MRa%r܌pQ"C]w :_2-2\!*l(TK}v PBFYxkk 0dRʓ$K*!a:gU ch9/S.[mg$!]Ħvl߭.(m&k ߨމ0+w"pɉ*ɴ̅ˬRLDrW`}NQ(e(z.HOyWZF%z;Į(FwoתmLDLSQi#*)C6չSt.DJ,ZUSS@Τ@\QT ~q8(>G\ɛYH\[@xD l)iNfO imD32z%ۂ/s"JDp Ё2 .Myo#XCndq a!J䑜̂fHF咖R2L+Qq?T89çz}&v"dy bm+ B8CƂD4IjBۥ\QO-6(Ї]`7? 
OZPxZըV\s@6)(@&2}t9 m/JR=N7zr6%"V߲Tbx,Xon :IȡͪЇa%_z@t%P~HHh&I6L GK]h\#zJyNUJ.h$߆2r'-@S %+JEcɝugr)Ѵ*&nepurjOO3/cO=}X߾UJ)=nl##jL]jNщ v_@\h@)RV]DP^7tOG; wQxi"+iaJ:v4JvSz@0k%Igw t ޵#"ic~1`ݳa< XtȒ#9v/oHeIVlʒӍ8DQ$O]*%43a7:P7d2s,܃Ș4*+0RڤTۤu\-fwCd7t+ipw/v$LWUe-ʪ0C$ 9r>dcq,[y;J|nZ?!\|wPKg 7$m 4&\qh!ٔ&2J75I w9YkJBwNS }D*tJO- D,Zq0X Dc{"paM#s\H ecYmF) ; -Y&smXm9cD JbL0m<7/uK:HilD3y|@ G%4ieF&&i$KlxTSQM}Ѱj+[.7ۑJxӂ hФFeylD HPe F-5ͨz 60E0 9%Π4hb׻,SJ+LfN<odx^<*Gm}x8BND1K{:l̤tdN =fI砢x7#T2%6K%MF6$|gAJ#r$N"͡I  3X T0%MFOBsFO\nGi ̉q|'6Yhs:wmάV,'|qiqgLq8nGu=ޙLxl8S4}iKpUqk^}AAi~1lk>y6Mi{w7/c ޿l:vbf-aǥ^x=ĊO0Iv;c Zt#oqOa8g%}pVY~v_du!J"ſY 'k θd䊝[#%pSÛ5){o3,c|y 0pr7dn /~/RG-Qpq0~RR*>-S6"N ?<#.e_Zq끵,#/B|%|S-6=\+"ؒ0/_ b4zz^ЇU#) MOzZjoItio:Dgt69^@$m"78r&c^i 0b-Xf;!lhdQn;{3~x"%a楑a2oovWw(in&N/gW_iη?kBq]71YID|#+Ԩ$@K'lls"r4tnxR]lTԍ2d0\cVa:"VUl_]= Ktso"7,F.L$%YhH1faXUm`٧8fрZQyN? 99qF"90 ^خy>xx7GoxJ`#Da@$HΙYE%T^zgCSS84/ &HZ8%nV3dsvIi}(yxr4+jA[9sEt–%XĜpgS:qǞscy{Nd & ˙Ccш &&;Y 3N$4Y!xMvv[ioA;@b\t^\] UÑT rQ&(',>|)gOt9[Vv(`<QYuY@ź nU)tL*Lu͔(j-WS:O{;?zy\ga}]DZE@+wϮ Gtq*F a@Rl(AC>BL+iW`*Zdh&.ӁQLA2)"DQ&caT%Yań!d$Ě$rX)Bf2BnyxHs^M.uv {4BjI,E4>']i>_f^Ħ5pə : ,cN9Mˠ6L,躡] ޠxx@Cy$E@Lڡ0YK s^bCަn+z@R̠ 6 cVlfL.} JڐЙsJdU:e ޠ|J?ux0!n{BbQIntTRV_߭h |ggqF[%Ëlpf| }VXz+$*^jp$Vl$[JwB2?:[ bVm]{* -Är$]BuP1k)I][ H|c9NZ栲\m9(O+r'P?<~\?>e]u_\U*NȒ')lY Ô֜3\2,7i͑gA+JFlN)J9q\ʒ =)(Yp#ZLBdꡔYʏ;Ut~I7w1Bwg0=^)W%RɬhŜ IB"`T4*Yd0P703xFK/^aY ::"3$Zb 4+,Ĕub@Vjp=_c="3fJIZI\&D.DРV *raO D9紞i/+V Ͷ-'9L80V=dmTٲC:6Q&=& 7}iJ(}6p"yѥ%<%ڠmn3ϋ@\ nދCP (gr) FcΠ_B:^!.Fdo9϶h׋s^ 6_7`;=Emc!VRK?ϸ/mh[XFyGYI֐qA76};&)%Z_'YDZtg?pN},_q?<9lT лy φک-AChLڍyKTtF&]3עf֧)vB%e]Q;R( EM1o%I.%ˉ3EVYpP9 ~XYcEhIgN$r0tLVMޛ9rv LL<^:h#l%`+¦UGsb)}aBEҢ6w)s4-?,k(eݳZ3- d|=Z=`>Nnhg\X7̪Oo´9@vR}i/+=`^v*n$?{jѓwrFt -Ig~ VYf~no7WH z#'A|~䜬9_gw]9=זkTԘ u&b^ @,!<}cR!mf8ALobJ{@HѕXJ"Ԑ9oxP!/FL[ ['$t1:Mg ;U'Gh?R`"r'QHw>FR d4yGh?B N'kmJhdԒ+&!;%KョRGy鐒M=o S\B,YIUƜH݂'j|Z^`⾚|15N׾|%VH(ŬDnjkD)BSNd@SPv}nݧ!čMYۨ # 2I cRSSdzlz sηoyn_>p*>$ʜ Fޕ$]K#2.mg0F#HEIn7}#(ZERTR2 ""2*xIAPG.&cP)y0'xwO&v7ݡ>A~^7I7iDŀ 
}Ԃ8>m`"BJE4`Y`q./w* yZ-0kcjُIB7O$;A%jGdɤ{ygF!^@G119=O -FEeH@nj (8/x}kLJ@7<%baj`꾳#|q$5r S"@cAN dMɌFqڪH`pI{A-sB87AA@?vj9 6I 3vbalH/5ƥ`|F-5(:9wj??*884Xk:'t@gAbqTD)sq?9|vghU` P7ABtHDbv2 ~"zUBÕHŠ҉jUZq>NZ3egw\)~KNˬ(LKZ("j%q{QEp\TVw582{ӓ]#{9߫ ~|eTciE3CTȢ%z)!AĘ%@ PU[퀶; }Zp)Vq۹eV=Hɾu4}o&ϋ6u%:yj?>Ddj)#dE,E7A*c Q'Guu()="%YPb[<ǬY/r3ތ}TH_Ulb1|Rf֌Rc6-KpqsrNnIt_~wV6?<9)ngw\r {04&aA=LZ_yDA%)X*b3s{F)p-agnp>|+)f"1xW3{m|څsO@^G]fם^2ug\e| a>mC1?-^s!|&ߜ5g\2qnxg3W9)+f_Y$9rt@/~$6r[oEJ AbemIH |KmBU=MiGB-{y6R[ Fon F@f8c?mz({`iB%Qi4BrGRkNz}5AP>]#).4gpKvǺUY`杠5'Nȍ܂@Pd̃9EBqd M I:"HD[n6`7Z! ,>N,X 4RlިHVR5c;Qs 4W4_Tl5x'Y.qX}Py*P ekǩpMFQmGUrM7-TFld *& %B̢evZ~Urp+e:h%HH<3WeJ:dN}+W;)v u'ל&é!t?_T2͓/%|vmh i̝!% Λ?Ui Y# > t9E`zxSBZ_cg)@PAv}VR}4dIKʶGZgBZƑiZi׋+)#U0b%8UniHXuN^xOuqI!& /nޔ2@X\IhgMDB=!RAZea ;ːq"&<Ȝ; RPkP͗T |,KUA?m*k2[O?`BZFq6\u^~5e{#9 ~\=S/SO휝UCsiI$yힺzggW `BP4bi;dz=s7 idC#%>a'/RLfɎVvE?>!4?"<|"*o;֤zC vUJq[-_i>5$ YJboZnm~\0Fmu|ukq;puVoL\7|lÖ2IGzKZ5uA ;=`ʯ>Ėa:{f?|X.Z%3ߦ80V xJɗq%@Y#P0KnCBe2NЙ J6}f'[]#VHD_7ŽAȇwr,NոݰnXrޮW i١oS@_C-˻f YM77@:3+Մs`.#"VB+/K7+365ݞJSt񠅵h~wk7ky5Ù=cy>ܲ{33Vo;,~`hib CW STEHֆðYI֐qo%I-x/鎋qEXZ+,5_hGg'^Sy3#*=c{OѕcN+x.Yyx%-V!WtP1hL:{o-꤄t aDD-Od@ípj[ ]RM#1S#|eHŇ??sv.Jy%C3A+I轏1heYրA5k@k*B{ k-brIq$TJdp e-&wmI .#!;8\ X`,p#$CRvW=>D%iR(mq=ǯ4"ģ%:p~9vޛZfy6jw7i8_ F&ecwq0whpd%p : $EʝR![4O ˆFX*7Y-ZG]™h%#3):5JP!S% thD8qϞ-uwlSWo+:D#rafи~sOsVuR?ڔ 8!&kЪ4)jHFAv:roڋ nh18JZO<~>ߡm??,s<0^)nE٧^e>ZǕڲpy=al% Aj?I)„AAф́(4w)(oIBt (Do4ֽkM 6 g?qs UxZ9\i-F'Ҫ _iUR\ |VgE~ǡc<_t"d $"BJʙ:*tnМ%# cJFet !yOtLH2= CAp:EwJ ҒpSDBs&;YxxVE>]pnɉ0{x`ɗ C֠vs7]$$p4s!,ѸҠW"hfNhc6 D@6ٚY`xI%o$h! 
M&Q#3H&3% ·$v}xcPBTN頢ڤ4A 2#G@ ͝QǶtqZigaG`=gɧתz Ϭ.40n"feЫ11\}h + +m^qpW_^_޿i}ߡ5eQW](,pm).}7՛  l Esm~%s5ĭ7mC]z0}&m^R %QOǣ~zy "7 a?G-sFz!ip0ᅳ%8;xG98d9mVhaWP,(NdBJg5YUo͜ U1Zjn<̄z,Be¶) o)_|CPV{Az=wؠ%넶{-8 R` s2Z&|&kmrM~s.vY(NHbk/M2y;Fg Dt${dtodʽ&Rs}-Uu!9ѸKzmg99L9@ZBЃRMaA]ݬ]3OPgW<"!fE܉GNខ]b;jH3yr u59B%}v=^E>k(S*1H1Xⴏ ʼDZ/T(c\S*CS-e[P>()$HNϑh5*2n$ 󹄏F!!OͥQj?R+Ǥ550-@'mp(`?#!*.3h:=g)}DZ1O+L󎞜YJ <0D''v1M5P4$Q+fA&'/~UP.I6d * O#rjIHڙYĎ2c"C z Eԭѐ028${qJlä<8Ɲ?V<,\-V14n*?TN׵7~'A>S^ZfmzHve4Uv*]>gYgJk9vrP/CuqtDPv _}@{|æIAԬX3as8P5^<;o9]f-sCA]i5xc㈲Kj(A.m8sǕ_"G]~oM.zPM6/B) t96//Zcdbp">H#E?aPNň\\k9p&\L>^ľg"hH`=u@H.\6qpUK2_p!{{KJg)f4t($SLGQMf[ث]D,w1ȌkpufGRqtx2{e~vIʨ/_~{umڠfŶ9C "fmaqlȍW>%]VÛvt;ǹ|ן?/oj0$gOl ;\wPDPskѠޡO= bw/ 'ŷ=+8"qc("|0HEڠAQw@ Bv/M7C]+vymP:M#i8:U2LjJ!hTWX0LWYر 9h x運ȉB|4P⒣\rB(k4DD=>y.zz* '΃jCS5Hk6[UgƏu\ wm2؜$#sfr!e8:u3ܺ=.`E=z~j6b[3*eynz7!4PN{v㶶߲+ M2-'dڮIr5[g6WBYD+|6ן&,ueT"Ujİ9… uP`HgRKݜ W~^_gVa>_B K&T|5tWK-N`7 285>DxЖ eV1'r# $u.[8~=sAkhm0qo4 G|%z{|P&Fol\2Ky3_&sdYM7g7T tL$g" ,Ak$gJ:\TuJMV3~r' 09,5v%4n>j_cRO[3TAM:<lwǙ8c̎1Oǘkuǡ[Č17fY_1O4|_1m!CW UA=H=YHۖqF_wژNʝ[swqq1-2 ߻"B}+:Ew,Ch` džys{Z۬3ۆGR6ڲ@Ÿ3 ۄ#ڨɌLˁqt.L;eҊ΃(N:Xծ\nų (/EM1o!I+HIYQ) x攦4sU$\[kaSJt4s sX6˱T,#{ByeYׯY 9.DZ+SsO/|^LD P:$J*]rΐؕ,V.B_\JHXS%uEbsh1FŒH#$_. 
F$ kXj녥(!.'F(ɂ.F bFG~ia< N+NZ^(@ɸVA`hH{Vjʼ~ A:N9NӤp6ob 15Gq´T(ML(*$!XɥY6 2(F\Im<2grx )1m4bdY .@V-Gh:•ǬHR) `"2MJkL:pxv,dx 3qz)xUg80ԹO{syM{R;,lE o NUZ_{YM&8A˻˿+GEOr>yfA@Xn+ p,&I!IebZTAyW -k:D)IJ/*@PO1.5ϴ҄gņNKs H̴uJ;'ކ}k13b{`6+0";A-<.mZI֔APLJKSR AGJC'zKy\+NfQJ%{-Ǥt|ppΘP!-ЉC/ fW,6;{iPMPo00)C{#F/фCNNhi )< M.R$ yݢ)|RY64RΪh)U:"D+}&OIQQ*qC#J g?oY/sxlKW+]fbz]Y',#VԔ ޵u#ٿ"s:*3Ll~ (&e٫bvn,Q;|n]ުSU+A4W}]GR%nȅn-uA RE6Uaׁj'(8?gA.k!|ћU*Y߾hCe?<_uGkuZF <2scl[ +CuS1|Wh K*McS@p؈js㧦U[ ↝^jVrx9CT>Z)fwuw k5~}~/TT>QBN1]ЃBH6[qZC4@>[5$i*2FZ/HHҺУUn=;em@f#4sV,X83BU.ꇝ 5VޱMz2hNX䃃|]pR {/ͰŲԕb fS)޸Mr;á:݀MF*ԸCIQRJW^7˜]+4/jɨvc3ȹ ^A,AV hj*H8K롙 9M6SRKbF@ަ$]4jUjlr>J1.vnj/:!(^_BV Ey٫/WY-VzJv_{(]|<˄5KjBLi]^Dk'M,=d/px' S)E_z:%+[{nT!{(7+q9߳30hL t߉y%+^ލ/O~Ԥؽ-<ݛr;/lN2jy1e^KЭZ)w%w;%xP-gw,AS_ ob:~7ϤoHQKiӳӞ;\:2:ʴvIpQRG4SjjB`f|Z ܋:bN_jOP w  A;5jՓN't CU)L-ņL7#arŀ(Yf-V:vw18Kً='L&tVzW)v|4MA -jk7ĶԠ Jj$Vw!R\qvo.͈#]}x/|4o;/_sc,L>F NLnR%ɘPt)d{^yれBw<1MZ8eD kVcD}X-H?AtdfCAXGq 2Vs8h>]&Hj̡ ~; ˉY kx.g n0Еqk 9'V\GWwwx oқ`&:us{7.pf/+?u 0d{Llt4Fzǰl×8yOϝv]?<4_g0tUsZglIIwA(`+6Dћhk =J ?>}d3aJ)}Z<^~:a;7rt=jL=D*0&6%ŏj%l'3d[Hu}i+Chsf\p_Vwfi1^ j#ydf`?_ڵhLkgz_k-VƚS\G߽;=M7R!VFs& TPRVczJIwm&̣s›1{1al|VmR5*ג8c0aG[83%r?bmn6CT14˸=Ɠ&egKxT@ȿ-o1!JUz'rl%Wj^eu*L%(P5B70W1F숪jh]5k,p(RIͶi#@'[3 m(fÚlݜ&k;f4:LJe4GZGwHc"g<^Mdm SieԅAJa vQt㗱h 1H@Kfa' Yl3D5msKDn9ڟFxZk5<0۩#k)!;x{V ޹)K+#o' u&3e`4\AV+b" Lci`ǵE% n<`*'V9|8͌CFGy(e*X|5J\L2!/N -õ[kd,qIYlkڼ3ۯ3g1N-*м wuTK@ %cLбe#(8li쌥8k^@ e^SJ>X@,r3t$Rj C@6 4vF"f#qja;F\*+ĵࠤ`gc~@fc~U:2OFXBԤBb8K0)p0GR1/W`:dHl ΎT HTW 3]M35 LcgT9Zk59-nPҺ3k/F/G1c)!9yp3 RR6uP;2PB\ͰāE=8+ƪ8\ ^czD>Rw-钀Z$<| fq:pf0L!M6f%DAjKچLu1 G"]DNBBEydPiDiy]EA>')T0#ӥŁ1ߣ/3cˠbvx$ ۑ" #+f9ɂN)X?3 }^i"Qr *d˴w<}.orlKN'XJ2J!ʨxc6FC|vi96߃ z1U@@"Q)BwAK8xCE7 vo1vpvj&@٘y@"L >@ʭn2irƐXZpZ˽Pqx 0@dLʀGf̱{l,L ɚjDi5eP?zPZ ;M"\-By8JD&wjgL7H!cK9QST޹q$ k"@\,S"L Iޗd!iVCYS]]]j ޚ5'_L-r\!g; 4YLDf)ͫNOTΪr[o{Cae(,_V0p 3 `txc5÷I'9*` 7t.YB`fa-<i+KNaS2ߛvv6L0 XJW-d0=06:m@ہaw tkHZ\ 8,h9@l=r6 OC{0߼W7 |p vA <0o*Rc~yp-ls AJR(L0SA(yTT"'p'xՀ!w`X56IU|o-W*ϕ+4L.:@3j yֳl} :1GȌ "eѳ$VF#72SH 
1VxpkO>9. ˃LT LGPՔq\,ܳFr]r5W%D ` TfrG ׁi ,,^X0e%3:>5@2bi 7L,C y^BO!Pb.`n8]KA0i0YXӦW 9$VŪRs*\aHi#s ͕цuD ]c1W(8"QJIDFseꘪX`JGcP\\J5Wh\#2W(=;(;;m"A)!s͕cn\mgAD5ڼ*7 :{6xtw8-v:D#Ԁ3-Ϙ_zv0̻K33ܞ)s B (V~m{MF5;+݊{4VЭ4J(z4fՃV&T/k&sä4d-s~5ajIJ_~׳p<]fp5}hW `Ȧ4*G8e i6oV G6Ic&[VVy$ayH O4)\Xc*)`1C=dOoس.JyLЍOpfdvy=?F5PN7fz3 ؘ:훟#Cx0)ԗ5q6&ǖj˄:8'}2M(4Jk.$QBud}YsMN2"m 7͈@)l1tg6OZ۹95ULw[w-R6mD1M(˯^yo a^,YYX*%Utm A4;d R 0IJ,H8\_6iz$IKm)R}7hA"Gp~1BW]%}h8_Mb"Fy!_mV5aa39;= A.&s\I ;ςE:괵GG#l\Yˢrug1VYܕuxgsPKΦ޾n'V~z<iCoQK6q2,߽`a~lO SS+ i=qvӵ! [rp FTS 6G Jc3EH4KZ`Ş,?>#kۢ0Fcs/]b<:[m#6\+b@umND3ADa+XIJeXH#iF$kme] %m}RP”LRjfCե`@94ЩಁL@w z^Fw>\rфH{^J oX:H qUGAPcEqNY,un_Yg6sm9\ t*m~p<+zZfeT|jUwwSfnc17ڕ;`Ux[8>:tߙ4,m˗fN~7wcZ1&S߮&-ȵ?hWI%yE)x=nqK x|0.֗Zl/uAƐ>E-{Rt|kkI;l~f0N?fWY*A`:;qIpćp6߽:C2 ݊=Xj=L`3.8b'XtfrGOfs.O7nuAd,P*J ¬+'?-͟#鷅*V%_0I#1>_vhh77?k6%f/Z7n 8Ժu(d?oP2Dx dSSnzs@aA@~i[M Gjp~)miW s)~\ŁyC,X6-T|/tzܺӳi8nVύWE[p9gs0 Z 6XE+ 1&X On<>>0hVyq;œtWՖ4=v9gI9ⱂ \ZiR2EU. 81БJs/+ͳ2z&ӢtV}wBŝ=;MlOYoFEmH$6 Em͵3u;ƗcW=kMh J? {RC06^Zd4NG`׾9yg4+pa: ,DmζxC쑠u~ϭ='wc[_s(܁頗C=Y 1 /XMζGd 4t\*.FKyЅ[^>?c rtQ2@z[+ˠ=٧2:"?k%`pӸ">{wgxiA[Yبq[T;p~+t[+v4|yNaʞ ?]{q.3rc?Opmj~s'(>RTσ7L(ŤaQz(%70(OEW'r "TTqc)nZR {s,La -ӻg@k'5Dϴ0WQq 6DJ'̝Z09#^a4@ O C_9߫e4֜k \ 0=%Ga ZU%nd\PHm{ge ֗f<m%[֖'O{sUYf; h8;h}2qB13[,[]9UK\JM6roX&R:Yt! 
pͦ46Vl+{*/ʛJeɺ-voq2kX޴Vl=Ym`iSKj664W!gDI &.p a!3KTB7qv6|,04b{ӏGZD"j؇pnVj,*QɩbOH eeFOLU mGTO#Fa0(ẓ=#ۖODd/HfWN`s叽R$N<cHf49?G؎ip5f_5&>EaJחu %(\VmHe/ݑŀq6؋/Xa_%(RKRREV/9"Mšds꩞BXYL̬)8İ^ n-leN0϶Ogbz؍zٸ;n{<޴e3Tڕ^;TKί .0K,M֛N ,Rg2D!"k/%%T -XZYh67gYUEs`Z10w]C\`R-nYTY7<ྜ4g^V;k̊xYg mЫ#+VVgV'`c԰^mJ7 U.mܳl/dK޶ t%<([.Y#lv#XZq8~iڄTj/`FPČa^X},xt$c~k[3N4LVdi[$ /cv[_1eۤKo>mI6WvҒNBN2}37TxP>@P8q8tp nS LjZY$رhu{g8+RkF"jY ϑ.1EkB9+Bʽ2:+^sR&iRgʷ},p $lZ-zY`R/4;f.Tࣟpi/aD+v7 K3?OZ~`igI)<$L@/* YKpgS~$μ W?ՙA $(VSYBnCg7G/IO˟fIݢ6n\wӏ_D1 f 5xsGFs4 UXN0C@Q75 @@@+k ۠tE%#L3 0I宻y+;Ww?fTz͔FzdsFZ j Q"nYo(N]`eGҾNٓe~Sjӳu3z7ΗnrUsmS>r7n)  j,49B@'!hThS.@5Fጒ1VYE 65RՀmT ԀREvӚ85x-x4Nbկ}t}L1eo+rdyȒPFw{rSsYAz !.` &cm?_j/9v֖R2|%gS4arcvfL0U.l=[xSed U c &T逰 Y^ wE%KPյMj11jrE E / PD_$Wv~盖~6׃à2lji{5y_Lvo(^st6/ HmJ3: +t0HhQ.~Zޏ>ӶB䌂.dRbJ R2sg,gX)dVqS6z*Xd+"Rzo<3ܮ dI2&Rgخ]㽙̍ℹuh1湏3hNF}<*2*(IfXBL 8`A`w&+o+[oAw'~*Qfy`hc9}99R|ø'#ZT!ѿO# `;K*"+.ғUjDrs##6GO8H8 T;05 c\Pan!k=!Z+c\!R'Hʶ9%t# qy9g,Xm=_ 11U1A}x~X,YdRKlB̆ѩ.c-delWeoW!4v19TP3klDFpbs"m흈lwP>Xqg-Zm\*ɨf0{I2^YJ[ R@øa=G]t2G1c~vCvղvxz7Nՙ54+!e5,j+O})dMl" B섺b&9:I\q2]1 u+fRv$RROAɕe.IT=T6J!^ ƙjG/g-(w& )`2έ8:L#9"r)Fc98#/xʿꃅyVjþ*\\΋dPXNS6?#ot9;N0G%TP0d$§%i5z0Rʥ̽`Iɱ!#Atw^? HQ5x;m~266bNj P 'eCtB7V/!P7]1s{ׯXS=>S퓟mGÛbh~X6 |vQ7;' Mgxtx_m-l#/K~v%&y~U}k3s~߻e^y3suμΙ9:g^u.RRuμΙ9:g^y3suEɴT,:g^y3su|e9knuzy3suμΙ9:g^y3suμY2suμΙ9:g^뜓MI6۩d8xѯnxs3|l~-TaPU:+E{i])Dy[ qɬQ)QtZ4 > z `*)a$27U*x=O-hRYUA0o]!UBYJђiTje h?}C*r.OcpV]l0^Ttie$#VU._x6ZVĈxT2!奿Kݪ nouZ+7a|Z2b4OLRUjV$كݴ < <{J(- e08E]DXZmvùV[:zj,]V@?8 $i+4%>?;Gh?LBx}(/򿳏;MSQcZq U?qynuFDIΧfݨ9##)n^spJrQ_3w*){#sK0Ç .ƭA~=$Vf]Bl|,KoX~"zUQtbCWعIIPAa,N^t2񅃟 k@1Wv%j$U=Pm6kVp$Tz`%l+0pL,r!ufJ`xܩR#m/‚vv(vGn}vFG,st֔ !uS!N')Z(dBCH-L,=s9zA:|Ƙcdto#^r}@JF/tJ4anhZ|PЛ\wJEt/~lΒ]hRv:xl 㚲%ba*ץb͹ VWP,Cp,RC\!ܲZ[Nx5>x0+{ [:p(]"ec*l +GUyWXXDY8[_8p1qY,»lnb>~%cJKwrd>.oA* ݧ{aYWM KOo}Ͼo~;yؒsXݤްibztmI&;,$iŵdݬo=RB=ڻ/[D?'3h{>p 1L7:Z㾤FM eg?&""7!] 
8jpoVT3w~U&IWWcUΏ/njaTyyAC"qD}3gF"r9ASwifpko.gsrFճ4c89P& -,$U<\Œ,}HS:%¨I filJ=OL!(>JWWۈx~3KQE}O\PI~,`Y-_\򳹛_'#+0 @f~_HTUx6{RpAK {e~vI }W_Cv 'ș/!Ŋ5 W3lm8"78r$0<$zd<8 %} RVRI\6@W]f|]d9s{ډUFrJ , Xn$!.@0֊hN,gz3x%749kN~4ީ4NHerE,pTm$^ IR1J3cĜE N5h4y@F{n2wEewl0y}ZUVzw''IszM:q .tD9 "RNs6;˜ݫw i:W@%- ii%uۻx<(7?|;NIʭ "*Shgr`KEq:諽Ȼ:"画"h<Ĩ@[GY&Mu32e-b cB+m*:ū x$ )[X1c|@&Z'Rˎ/^;NKl>f:QӇ³%f4LL_3_ldfG͠˗+"aR/1",DDꥦ0+R 1'l5 iW:+8[ QhC 悪Myn;,|A߼?(v!76}<}ίGW0+gY7=ߜ%`"{&CD)Қ)RR(Ŝ(aH|| 4[z.vZbyDɚU`ˤF ÷A7\|̗9sg3sM9'f))fY|Vw8^?S (4 P7??߅WYK6Zd}Ԍأ3Fin(y!ƹ+"bu.M M gQ-7#H˽95ƎL5h)oxbXIPi%v,(,;60L@ㇲ G;y6n 9v3|%gS4a u{rE;3=Ŷ߻VBKHc?JY6Ify9ÛEeE'fK%\r#@d̫kVYZVf5wMAɭŶɥkBw4+3R.M_1$D9"cѷK[5^iUR6o/xm KM!KQea6 Y6u^XhkB+6 LE[fdȬG"Hp:wx_-ss+:Nu'&k}b$&XZ?e&[+"OKGCJ9[EĖPƽuH ÌМ$p(daNK 9zjv'g"2~m!+iW꾜ΝF/,j|'qS+F)-LOT/I_}l%_N_(~w_M_|O`f?EX!|p;>4`k_xv7 w~wh%;hkp(J0(F&dDR׭g'W%xW0ApZg8j5-2 5*jV{4z؄!I]գͯRa(=Zt]opeB'<'Tg1GLSGFQeYNy (W6!`BL®1jveၡ@^jFu+ e!H))pgF:aHS8N["8ǨGHf؄P(|GhB-XőpP\-0g40", fJE"Jo"XN:QWa䳇yxLk dԂr Db hDk ,=PIxX t!E#CJE[Ai.wToJ}tyJ6>fį;0z sZj)3ln%9Tpwմg/k>/:xo٪xr09M aEr 0"XtdX 4],\Yө)X,ЬJ[ez9w9lVJeuʮΆNBgȞqJ['^e0<{McwA0h]gωv}n)P]"Ͻ 0ї/ A\1rF5gTAa0&zt̫N]<^]akͨ)D#$2"8Z'=U8 u Q8ݺFḥUh4BkM̨9hQ9b0XJ8E*:dvj89x#z|BPc{כ6JܠSJ,OYREDFۻx<(Ɨ ).|zc@ 13ZF`'q+B+Bu8rUe/`VrCy 52xb(q\~@$o!mn~krg AHRVa S띱Vc&ye4zl"m\Dt{_tݽDPY%*MZ*ns@)8 :(6. 
0$5AE0E.(7 qfo$*JXl8Y4|\ B4USM5duV3:ٷ>>ѾVgZ{8_ x~?$l6 acVQ"Rv }fHEG%[g8S]U}hzqu>>YJfhy~Q:9m6qEoFJ wdk>?Rw7ACi_vr\yieۂsAySX+q6{8r_vgP;ھ=S#D<*!.ѐ3#`aѤSN#R,NaƆ #dD\a$X5XB$%\2G!3KT ]L s+xG[8^hIR3 ŀVb,*Q)0`R W(+h=1.k^T}oih {Xt\mj2EmXWu]Qּm 6dTH8gׄKB/WImW&0krlWKT5r!s.E Ij)IJmA.#y%BҪكjpA=zߤyPo%Jq@ @f%oNC?4BiRwϾg'ے3}`|sSk_I_$9:L~K @n^mHnw]ޮaC~O\~Z<~R:_A88NzB6] *Җ>{qЬ0Z0m>VH\g @M4B;No3x`q.CxeZiA,T<& ,9wA yE)ɜ<]FeƉޛuCvl|u[>):W8Fg+ %%Jyth 0Wm;ӐZG-g’ZXS|) j 0/Hߵ_s=MPF=>+jJDj]H4(UjBWPGrHfvT ,d:& 铱SiCJ 4J[ B\kB;1I?S1;<0x)z s2'3m]t%fhm.|xbM'>qKY5!lo}鬻Lck߸8БPrkt;mv i~mF$f.(7 ,X’;u /z:\.ޖZQߜ͟V<̍CPQoDf{9-ghFUmwo7M};7?NGO5(ۨ[V~}5ew}5DO]PT;Q2\J{HLJqAI9ş$1jAQP΄׹R}yM0Ǣ}Ih72,*eF$c#3(e!KiJ,ĢcV書UEPX)'<ǸJRD( MZANRxH|Sgl7O Jאg}.P6d n]Ϟ:M4#"$ QʼIKn*(()bLP. ˽vb3Jrijik*7OrwO3^A+-o)B9y|GM q .880ܠﬥ)VY'`u]Fzֿ5NBw*O*uuص-Oj`>hu`MDcLY]{jb5!_b$Zs!FdV"nF)KzL)J#_;j&!K#]Je@ˠj`흌"7 LGb-Fv= 'hhL.!j "Aq"":Ti1b$ly2YzAC;LNeIq8*.EcYV]J(yU=b0LK\|7TQX fI)_hgL)|v*-m;&CH7_uN BhfQZ`-aJf,IY'lSt "uje/Χ㶧?_};[tĜ6`{x]TaSj:OW6[՚3e}n{0xl`*qkܴ_5Yz抿ܼd䖰=ϔ1L> \eu!I~&!?8"쯧(%Т>ŷ&GkdwKl~AO°~34]xP|v}x1c dM愮9\MG&8㒉svxl\0Y^Nȝ׳c0s#17 n([+ȓaJKh:{Č7M!}>Bќo;ܥ4}I3mu_KciQt ܡBy4.Q;5zC|Y8G %B7OC"=!2jp+\6Ύ8-K**DT?} =3PͲ͈KT5Tr'heTܢ1WZY<17kc04Z4}!aEn$r$J V' 0p=X!(/ 81@Ls'j$ٙNF7W7*n)8h9LCHlqitT z|j;CU'|uuZĤD-)T_{=UGz ew tA[c3YfM22Mµ*2;7Zzd}?ոW b}׭kwI!Fx?)7FL:^ߗa[ޓ Cgz ]!ô#{I1mVK)cK1R>PrJ޽*xHoxsz-[v}ۭ߯ H빐KI5 &eBf>3oh}ANy{~0ywYp(%j'0.cѕnJw_&~dmÁݤ4::)ŤҒ(fXg;A ~.>,kkz₋%} rSZ kR;[f7\z oZ6<8gfV;E)4/yKUO?{FٮC@0b&ؙF15H^v6ÒdɎn)K q:\I nuu3i{xCwm6W^;mԲtny͝O-z^h~>5ͮ?9?6svs=ꆎ#{0[ zԜ=N+ZN'hm%f+ hL;<[y4H8挲LFbdذ@aKT-~-[yDdپ?6i+Q,z@I#Dfn!x-<{9&H\bUV56: ,0-4HBm@?1%c@陑x4ʥȡ]M,jxߎ4=cޕ"^u¦A1iyH ,EL,{+`z~ 4%-'N-NI[V@ 1#M gpOo]PJpSxRBKOn4qly.y*Ĺ>;ձ&y:f-дP:M*50P#"L.8μ";h CãttlB 1Cګ*8w ۺ]mh;+1γu17^|[y9PVN4ZF41xy2$ݹ<%xC?WPCA0FM&eFb`6KʜA K0#_-QeGδ>x3y#1ŞORM{58[[|ρ@^.\*03O}=Ɨ?42Hɥm4T 9:N$A{b֝ޡdRh-2)ų, )喗L9F>:(6\z%qTɵTY,\Q[A5H“3"dv.3[!Vg w eNjVQv*ď>lCͶ=[6BClN,x4>'\0~‘"HMWd‘&g2t`9}Q.pJ($]d F18aAeo,Z%!IV>&p E GJs]#jYő9vЮ]nSLjWey Rq&̞_]4 ˜s 
@HV^48ՅvLEA&F'+zhmvىG(گ9䊥ku$.w]1K%v"PH:iOVe0\0]L|ZɪӞR:8d\z&I2m9k#H) Ie#w巘7 %!ݫf&T6rYEӧQ{EKh= D?B ɎxB ;U#P/ Nꤷ**oQ\S6mȤR@384 dC`تȈF-@cb zR0(<)"pJL UEdZr5fƮXh*cpXx/M٧<[.)azij|' wy9z_pkGyRue  UuȒ.rN ^`S(=ʨ:nj\$-2 vʫ%fA\0.͎]QVFmۡv`xƲ>y,W(2/Q)Ly Ά%R- 3ϪaV]F$ !HH&&Iqc*}1IٌQ_F 0 "V;"`uq_""k`uAY.iN@%RzFxdA(\ cX2PRͨAD-y.8C#LH&i vjlFEi,2.B.5d'}tA$VO,W:/ν@}v٪Ct;0 #FKT.4RZبt`XJ] {YQy^ ӧXZ5%RQ?]_ހ޸)&Q <s{[N0Ϧe߯ &fG]}uI۬^v~/mz*ƥR3.QaӛNoRo@,0(u,B,uΥlZpE$8*IK՝t}??;͎}e84Q]))oFijOJ;OҰ *]^!$YKCT]cs%'o'|OS"C>(k_ ][Ӧ궓udokwG+[ S?yGߵĠ7|z6~vً]M;7xt?&G7#W zjmfcft'hxKˢ3ͧY@-<:ˮt vU2Θ9!d-Kq2ZUp*ZEuZ\X:d}=ȪD`'WE`-OF \c"%^!\!+:**<*{-x:\)^\!(-v[ {zG2]50 klZZEq*Z5G;p}j@(qJpU֧dHkIItW ċ^B> өܴ O G 2o1u9"RzOzҴx"_˙tsnJ W_-=5'4;!'sRQHY/Ŗ^e4\'I} %p(iqR.8s\GjWo?Uy\6z/y>;_oHη d:-a+ܵ+9,BAֳlP^!HFYMH 5jˉG- E2noP\eEш5F{-{ P֢9 Y<3$)ctQe`g2UxD8* F1Үl W7qcp*H"CMF٠Z:rdd2$ɺtN^ŅxNySt:MZ"bhv={Yo;@ 5-Y3Ys챟5)Κ_Y3-]Vڎ䜙}xIތ6Tt1 5ldT!&&Gne? w&y܅SR|D m,JQI@P9s.$$(FOG^9*|2kt9i|UMZp!Y֥ȲR^gmmPRY2^H3klB7%ޚ*;,m(_>H=8x=ll>7!MyB䣳d+(&){,ATbJ#s#W 844Ɗ\M,I7bh8"%*ѮlG*ؑ \3͔DPDh wsDFI+0-:ZJH!+"٣!)%-cs͓+C-@ҟ'-Of cvw򷰃BK?\ )Z/B0wъ.?m&ݗKUkEm%AE I]NX[.'k$zRqA(Y4!V3*o/N;=;ܱdgq cJaM3 $F#rTZ& uWH2i9- ԞfNZ3o;*@i[>yHhYQ6K&.p xd̽"8:Mvӓ]r7A5#SѦirfA#:.H=Ѳ` K)p蹭G0Ŧ ӋujȩW4[>&uVIhNV8G rš(0($s$d={Td5k`rF!r3:AfEB]hHiCSvtioD;Y$& 6!+iO4E%Vo5JH4*kAC6(F}s_~r*QKVN҃ČNiLJ24ĭ 5lt.#o _$-ڷ)DݽNqu˳{O~kLKݩ{tt!td::Z^S/+1I#5s(dPM9NF6wӟM]Yq?~|0 0M!jDPY "Uik*i$=Gu}&8JAw ,t%\pQJM Fd=ɳjNo]Zaλto?mzkyYꁔOAlzgPH4MfQʱgL)|rLfN<$o:}aFk ͬA}&&k̠@R+ Y-qb4RN~[*5KuA)3NFY_01D$oYLJH mA椙 k)ht*:ޓE4K5HA%Lj%g dQ҈HshR/BC%C }=#eSNģ u[<`Nx-3KADZsbo8Q_ >n}  sW*hMRvCۯwp<+^7`̆7zvmi5y|(Ŀkti1~-Kpqrk^q|ç [ќ9Nޯ?^Ls?1ڃiUήlݯ:h$ۊIaߞQ 2A>[) dX|l8"E}s :r"0D}vQ~=]"wY>BȀFY )hkc&ɤT IFTB霹VY&E;BbR'Lօv;J0BVomG[׏pjrG6)=YRzzr?oY[. 
ȽKM'}(@heonvEˠ!jnhtetz1vBfIOo{yɗMtyc~zN&{[]~~ch~2E=w]|)"jqO'oM_ =6pGeݏ1YaC<Mj4;RߴzRZ|e~BhGiq_V{njx{JcŽVKsfWbH .L;k:sN1بYlM0{axu8eA[$ޏWZ[ӭo^>\lӜ;vYCgr[ȑ)~ - cNHFEg1`@%`P5£Kh ֭dxsCL=Nҿ5/Jp@oZ::Ekڵwk-父 7Ӷ*PnDVNNL"ȄʑL#?yRGvw:kaҚr+[%.,GI*9#+9Uƹ%$z8mSQ$A Fε TmXm9%~rZq,h;ӿ,|qR>(&;C..y7{.뛿Єh}4qm]9&fr&KGx`d#FnZoɬh ElJchT'cg<*o2`f`]e]m9%vӆq]:Nھ>g,1$葔U^@$  'ܥ@ YUy\LȐ+A%Y!)pRzN ȨFjL5'8X>NՕzK] ^f36YМԀV"22P* e9?=&\UgXE1j%rp n%:2XN,vyj9, suYJN\\bEC])*&c|v/ʫFP4i8yo\ӯ-;HY> #Db4JhbޞId> %sCA5Mnn@lyk?Z6eG]>mI% f>r|?DU6{9r3?%3ϼe*gUq󈠡d4&ǼKTlA&:E=ӑU=WS4ԅJʺ,:שks+)_('cJ O.Ĕˉ3EVYpP9 D5V{DG6ɪ)3g<^q ɚRgjqjJH!1[B1P\o9l>}n ѹoW!^$WaM7ȃ*%e?OݎLFI;v`R;WՆӄK< ,:@m^X]XI9R1TJ .r1q Co2B|L*("0g k0k}Z1H\Pg.)NC|xT見7| oZ);@K( bzz!h>Yn6 w|,9Y٨&2emjjC1 c 6ck:ԂUUyg Ӻ¢-mqx u%)%SX j 9, rUq'SPҊH7$hG9$rBtL)E"\Iʸr!d?{Fr u߫ۀ.FT[n^RcOpHxՔ(xc4gjz.Qz8%ʆ`<&nH fced(7& J(X;k-0dsXMpj fڵ}GZCd'Cs 9:hxx~yAJtzjk x@:P#dGSFϹ `О=op@z sJBrʂcPkc(h1I'iO($/BhY7oo+Y|xk󼧵A֛w6'usMiW$)M492C$-ZjC M :΍s{49ty6 s"HOJ#ݜ85^ x|yyWϽXzӛ)mXV,Y>UR;mTֹ.sgTJ`19ц+1G{IMRIyFP2 |=!kɕJGhP(xPwBiǎFga ;!EJ?I\deNg;%sڣڄCOfȃp ozMo\+ڴLf \!N~&1O㭯T7+VFwmWmlPxm޽ʷk]롐כ\.oX[ybEZuh vh50<Ҝ=lQfl[>w7?KԞ5n\r^&)/"Tak.?8hZ|8HG~^g:wioXsػ/IO?<\q 鷿>NZ%+㤼WdIe0 71>G理见ҁ$(TW$ݧEz촜tӬ'6 ޏ>NЗ~M/~8A=EΎ-7W~h >Ǿ<([`Vx. 
hg9sW/Ҷt[~KoI[-]7Mi-tt^(m-]t[~KG-]%]t[~Ko-]tܙYgsOSBi +Jz 3tJ*-y]dZ[%;,/Cq[|6דEmx?DZ=gמ%>iKA-hĕm%HBڥKfføCPEOhq B!M¡q WqC* !zÄ&4n>*N>":[I׼-GTZbAF?fVgq9P,+W??;zv2z;@RF9$"]Yf=\JMN3rk=6ʬh It Bb6hT0*p 2=f]ʡr&G,WK;\^;PvձvkV!u+ !` x vH Sp(J0L.(!TTVfm`5WĐ rE,:J5 Y$$%l4FR։E"'jl=/U㓄0E>vյElqxD͉zA@Ȑ쑁B PRW[CYΘ)MH(%LD.vf&Z 6FbBIEC]C8-cW.NfOi*.v7 Vp҄lLG0fR&SDԙ@QfbV[CQLX/3 +.*C>S8d)!/o.;GաGB}(0j #h:tJ؅r[' DZed_UdQ,OϺϟA\J쇃GPX8.%&?3 |u|q{Q=[_/:2s|N > ֏[1@m_ikէޡ/e+@/u^ܱeMopPY,$X(w @ M$^k)J>zS|Se߼[ xE+r_W-!jJkf\tғ=nyf6l|ĬЫwϬXM)X) :ѯEǴbIojJ7 Z6o/d%o[x :1tnVxtu_Ӱ4Њ]ӌN,>Oډ/|uvE=lRXY!PróbQh[b4&mKTdL423m͍fxaP鎳.B@' }튾|*uqP@yU(jҾ8Oy+mT}zy團9q&* UF,&og`@ 2 9fEG&ɪ{3<,v 1ǫ/YkmR.l:x[u1KLzB)9'dr57w7֞~e/lxb)%errJ&[~wE4c{ FIkrsR(\`cp:F[v,؈ڀ46+u׸EVy8򐸎.rdɻW:ȬddZKmz|@.vMܼ:2cwgq=OtJu{O orh7ИLXZLZ@LëσW'/URT4?5@5D;AwYIYdBo6|c$y@MԂouߙmM.;!J 8@+|έ^& >Nfœu0PcL5+I. 1'TsZp ?v%jlQlIv@GA$+'#ҕzqU˶cn5Mm*ȽB)/Me".uD$'krtV@*.MB*װ|ۚ&kpӳwIxV&P};HY(M HR2L#QҳLZ.0qZȭ*͐E Hϣ*fEr%#]YϪz,`B-1OK[6x-fiI;<gcCb@vG8d`A"bmҧ5U浬Poa{CP(*R/SPdiы$"Be jUZƓ{Ag8WTʾK$reV&dѤ6ȥ=nIuSGpT1lVmMl* dx3M+m14")!jd*%eRhCAĘ%xlVoGdcWELjKdWf[=,(̀ e>Pl] [6zD9BS1cʶ U;*o6A'űF; !~ 0q2 Hh&l"U=dR_]- *B;?_ )Fr缵"ޑ7]0J+h!Ry]nxG?Y`U0LoBhTZ y&JIDTLqB;{i-瑩c<ˀJ\fLQlMr:pH(|2&qKv*T;(aeWÁK_5{gz~Owdjߪ~#8._öxr^s|[uu1O=ubkReWr&TfeZr(0@ {9rg|k[EyJ>i O6Zh~wv}`'t<1T7b2Xh9QvƢlg'MVJu."@ns2ұ䳧:BvaRa}@uAED"̗25Zqу } ! 
!r->G#Hs\H :eCpYr7E,BC6F&fH[Я-5O|}jNV԰F%n>m%E 3ALpRBXGrTPbFNΝ^u28&^ ~T4|Q&j=zAѠIU^D1+.CkmHE]w"l}8}HZQSMQ,Yǔ XYlV}U.Fj"~[ ./Jk9\tݔ ikC+ z{iNB+ sg?7!ĦH )!)Y|>%#/p6c7-|gg9YIpK)v"KK{h18HoH|d 8Ó9;9׊su''6O'ibOyƄ9X.0K*OąiWXurlUςilԇH\MnJk69צƞ=I]4}Az%L{h-QBro̓njW.%N}퓛–E%QꂨGѧI,c)״m|:^}г(ڲwUeOF!3Ө7+-PsY=JڞS/s1 Iμc fA{$_Ț/  X \oMߴw[#wt irM[g 1/XvK\*i$ܫ{{H{ݻU0$UNR $+cj2 Ay;ȶ9 CJPJf+/2JA׫vCjzتX[:.)5 dNj SG!9M(b"<a(-y<ǡ3?lո7 b{{ҝLOҵRbS[,Ww>Kv6k:TO2a~^iY};I]*ιh(cJ1zwoUo=F_ۗmQ׽rdqd}67ہi(8vE`esװci\K;l8a=""a%wpUXHXHiW4'ixઈ{ VF)\t{ zj!5R{7d$Vd8תߑ]} r T% HnQ ҩtr'9jk]< LysV1k-H8xfZO`=R%7j~6Դ(++U=OՏ"IEzuDO H*R%gўWN L"QE7B&c%YP3H֠ D4䛺(r܈uU%}n>a(x<9p:?jUG*dtJ.%%3fR!`fsY\!3ңB]/zR&}x<ɓ!~Iy̼3:e5(0ʹ)"3(N1&9dbRۡ[XҕE!l8]B[L$<ٍF.1G5tҾ3qv#D.Uk\}uv%"t0​i|ipL)dcc&/!@$)kn1LH Y*._iǡx!/a7ɘk5#:w5YH4-Ȋz1>֋Q<!96Ķ01V;%s}4[f!%>-اW5pV8c:ݖu>c;Z$f-q`[g@<<\RIrҼa`EL:pgwK<t t<0)z~oA_+#[}b* :&o/hbruT9rQf 6'5X}gVY~ޥ[lv_q(dhPO"E]{vc[@:/uŀۃ>}Pճ1-8?Q]} _ݖzBOZGL*.?ɋtA# S2^]D톹qaz9rwVb|՚ì=Ej3ɟ[=jz693%^Yt0q׏\hM8 6xtK7H 5Cer4L{J\"7RT9GQ)Pga() )iĺg/6-͎{xNLeNd Ґ#CJ'm2)ZT|^iy9M't0'%ǪE[ʽ4_vS*Ek_W 5jT&,M)0pJd)*:ӕ897ܣ|C,?מ n~x2{e+O̮>Mc->]wH^{`9_S'A&m^ h'&b]s΄ R|iRRϨ$ fwmtX(㺨IBcJ0ɬ$F)"t;;g.˺L&pw)iP.!mx[K4uKsTw{rE e]PQ")xLX t,@AKRkp)"\hzj{cΐĹL*A[]heΠgw)`) &+ASsSRCFQA n٤oh(ZC6JG/rXkLĜƓqfG,d)2FSt:oNꗔ޵6ٿ"Sm> .[Cǰ,Z baّ,ݲv0cDQ*bԳ5fv7IǼlh%bul#&LF%5uT0bRp6a^b.oKi2WͪK$K ZEс$uΆ߃wǷ{{+nry,O,`т2\L_J@aʓ. ͸t/}(SMQ"7"490{Q}'r~f–h甩.F_P| 9v.sR&yI dAo)zpY8=Fi3l'ۅvɄ\{`ل.ƪLc^8[<=etq<鐥tT'T1GTWo;RT4ٍ5@A7Q*cF((`p"'jZ@)(sYk! 
ɷgJx[3=g;Tq\u'n ׯ*BѨBҤKwICp$ƃTRdNPIRl9y;q%z{;O˫iwn6y=I=OAzb0-v7'7J;;7y2[ho GF;m 47>h"J6 5#%aw Gأ#EixJ!H  } 1+ 1I" ɨNRyn-='*͚_]߇.26{ͯc' >Ξ4@Dd#CΨ`:>;#VM]HPJ2$.W]Th%( YΘXF@R& Qg^)6"Dj 7up#:Iw"eXpC =[]/wTMXBBfN-I!JV2D "6Uw&I9s;܆Qe?< a:(@LT;I*m1V|$gQ $X/^2^H6dlT U@yUGyR,/ב:΁~V0T~/a%P+*I{UޥtYx6z>d0k!.Х |{/OmDgIuWsG,Ӏ"TXH9IB/c"TQTp x-557ו<:.X-VI v΂w`ʉ]|q>j_ W`t/TܣCX&MoYsu#Aš<^NwMIl*SnD;Y%$ &'J6A%{CU7{}3Uh=;R * Aj< tRM-N ż m@i|p&@pH V/:oJD-Ӻg⓱cԲ$RŒ|Lh :'딿kܰ-W E*~=#D{ǁ̝ S3mT&Д$na tT4Ag;i\|l]) g*BF+{IHC?OyCfgO/wo~:#Q2$!(D.:>\Ҕ~MQHy0~$FVղMhEߦ~dzj-{Ylo\K XxZ r\۪j j;ӆ J_2h04M H3-/o,lޛ2>F_xuFh{{vޟ' > ^RAI[TdPY^@&@>HhAJX hlzx{Xf!g$ٜ$yWJjK'%#D`q%%hJ QN"ͽ:a֙Mfω7lG]s,0!4I:0iqK"1qxG\+sA!YM蛢P*څZܐzD4[/Kz #L95 uLt2O"g&K*GrN{5$w^|851mYMf}y~_jW.of+Nʟ8O4o~^uLyE4ZQsijPrRvw =4<;0bcHU \lu_hƿz=NYRN)TB#7uʘNy =]-ףZ.%v16"bLH [Ϭgvq\0DzbU[ӍWK 儋NzASKx%Pb:03UΆs`Bbn&QOwmfюeU wkݸFSQtf١W&~5u{3>+~\.{^‡w=}B7Whɡͦڝ&T t׵+7;aT*ۭZn^m,Z?=lO6mܲZo=7Rtꀞ7Z>m}:}͇tR˴WNZ|tO9f7쿵i.,%旺m^SaC}cjvQtOrvL} D\Ecm7n>p|7n>pJXAY&fǛJQ/0׋ =>U|)RM0)5B >Cx9tߝR'59қ( 0ߕ(gЦ#i*1eH՛1v*H@$*C6Gh}/Sd"ķ3 Uv(+q ntS@5c@0!J{JQd.RhU =$Sw2xJzt2!GfSePg56 PE+lAU_кYB.QEnDpYƞ+ʡ`6DHҼ(W}>d@c,'$ epdc0a22DDΪ\q*zcv@~yz 6%P'`k}uH'< 1( !Re~~v[ 2C( pWU6H$4hO1n~!RgapYp@ƨ(HGKS֨P";F1-V5F樠T1S(:;XThWI,j 8[:f1X=^RZ iH2/s"JUC#ZYY@kvY9.qbڮ-, 8FY;ŬS?(]rt]à^ph0[;zG~^Z~g_$r`?;6{᏿>MZ^]&| n!tz1_?,Y}$R ׳!p87&ۅv`W>'ĸ)С}[nuEF>E't/|6%O=6 hw^o')#'|F/P˳EVij6/-?`?rW"΋r2䃵ۭONx4N\\nDe^}͸< /Gy§lE9@ȋowXJ2EJ~rwQwJL(̘'i}h/}@{4 b'et**Fk̺Ne)d*zaA: > SzPb?fاTPJ /zW6E J߯v*c\ V( g P~/k@DxͿ2d"$Nii9ScEm&fEYV4&8ɎUU㪊;ߛAi3PMQy@PnX2ye(қb`X 6ˬ R9I2U<6EF]ɹ(jA}.Z*J>yr2/lAJ0HFfl7WiWa }c,t# 不"-NnQgeE;Ѡ O痯I$,9)t*E0lJyh "dIjƖDyOwC5BVLҪ¦.YS]aNx, mITDruXn+q6#v;&桠vѱ+jccGn Q\BY28@HK %Uw.0aѦ)'TjUJD.J`+:ir5Db&|L@,Bflި;g*0 "6;"mq߈#"ђA],]vHR9Ō@c$t $hխ(msBؚ[-X'uvJ,Y4ؒVtpU49\?ȸ8_b=$_gQ+.Ƹ(G\qq݈/GVq#8>,ϔ\ JSqU2ISk`#.]P5C!_UuApc k;+oxq= W)8H`vi@v`Bb[ٸlcOZxܦ;ېp)'&O/O&WӞ v?SY&K%M! 
!ګzI?\GaA`BYybZH2eόYѡW"T8B Y ۝ݹiߍw/4048v7>:;7R<;M*.+リmx9~z>T+j?yK 1zi녈j`LDcy#0|V%[X XM@cb5(gR 6n0#U'txTH Y hc΢.'R5'($<y $c%ڐG'G"cl-5*֎(% 0C#Gk&J#4AC:AǛF^I:LSшYJbAĒ5[\6 QiAU9>XġMB})jmٝ.o/d#ī._9\@ϑ-$l4 CZv %`ٚ3[ECr_횻d?iKWW]Dun^,T&Ȼ1Ek$&;7ߝMyt9& , \AC{C ͸MNy;0bFJ×5Z-{%wHrTZNWe"htmQDi ARxK,*XK\Y+<_Jr)D#Ol/lXR;59,Ps`x+PLPթG*X')F:R$4(r =^Ы#0nNW&@kw iƒǪxCvϸ녶vym{˛ۤ]{.rYPu4`G2@M0㮮)Us^P*{}K2P62)ӄaւDcdCޠllz96xKP5hͲ|$c&a651Cc.s#gɴ1OKv};1^oOu.&'98F֛=+H}ӓ{$f/WAhS[ {r,(x>0(ɐ{(XMPI߄=l&oS7Պff1tىQ.,I%s<&W!D1]HœSzk8K; )e$# J9ERh 2sZZem&5WCN ݽx'’Υ5WD ЮȰEul7͍ծMq82kpVWn*b!{ld6i]gT57Qdy . fofL` o3(]&]3q ~H ZP5ޱs5ї y}Y]ѲaׇnGvBH+mQ> Sb7t -$.g[$0Ʈ(Rh;tb ¸I$iuJwWZ%|P~B/2WVU{>!c^0_NhrS<iRN.]?3gFN&Nq}X濧gӋ*!e ;>֘p'-wZiK*>oMu38j3ޣSkґRPR(o=]Ͼ+6RT h ;UjnǿMozY-‹hgrTRFDkx DhU. R.C!3ZbLb0{2M 8\vW|! I.$ Y_LFK &(IRBPQy(^xx"rnKSf={yߎ 7F[~A=Oc5l;ٓyƘB!&SQΆdk|M96F* V E1&օL, ,IJ$%zSQ1Y'ti#2Bw1&`{yu"-a}>)KQLL-s(g?PNP ;,^ph6yw*|biۻ/X> Y&_"BWjӤw} ͒էL"%^987<ݿ{Fuv~W@'ƾH?>Mm|oN_|v񚧼}Ceo3ίO,?SM }kr5 &uzpۇnOu£qٻFrSl|? .q-Cv EaYJg[-YX)[,vVSl6Wd=n&ˡjɠ#_@v_+8d KePˠ;86Zt? d4|S]z ȼArTAfY1(."*;mɞ*i$=G]*3$t',ɲP\pl0 - >&yN=>B~~hV ^Ώ)nsƍO׼7~SNȐMQX fExѕ1Dɹxm椣1b3Q oĄe$}|iKcV :aU JwsKE2۷Վbpߪ@ 89\U֝ Qu8f .87ڂI3S<TtWziiE -- W:,ِX3Z4Jnhд  3X"^#aJ< X72=&@|%A(zܭ J6L\B࿧eb4-!`L/Geo~/wNg4nѥGenwjY[M[/waFipwԲMh'Zϻ9ohOwx*ǹ@z=n+#r? ]ڢY(1\ |sKXrTq6iw1 C4*3xF<2-_Ɵǁ@bZo]<_vh[U=;$ijHFxw^mh.3ʚǰ bV`9>39h9ي须pM[& Q3Q"OrQS5hٸ:]ozӻ(Ҳ"G{/ʴƟ |&ġu`;c66Ę27s*MƼ"' fAk&Vd.Zih XL&Ћ@ fs;mHIu6GrVQZiiD0E+DM(tNs{[M&I}|G]zNKs0jx]ORkߜKs^BgTW曨o"ko?5ZzIj>[HW(#^!mq$ d4JH:3*gh9jfĽ5_|09 >Ĉ7fz$m5tzD<7̼BBs5'/8FZ.|f26D)7%l?B)ӻwE [oU|>/aOv%zkĮOS(:~'o,Mb3aZICfZW7C|:լRG ѩs -]w]gmk Ee]qh1\b{\B6w]ztp_uC/yt[:]^1zlRˆZ6;*J07y+w(iw`-Wثlۀ_y K\<^kZvœۼd#qKJ'>hKR_zls._*gv׫|yp奫`'xD"kPZT&p.v?n{fP= =yM^9M3Pkg(Ap^P* BhYP"dJgYr,R pE`pK2lWΎ$h'~7}z=cH>5Ci<6jfǿ$]k\e7:&g1&xeRuoyH1΂ z΍*8|cv \%#)U*kpl'y;buQokoVzqM17%eJ047h°zXNɕ5 9lT"(@L ? 
՚?X1OUfqU?Z|aKWLj00CGX#&sSH`K.ÜE=ዀ)Z(oH:O3=lȣ'v @u ʽ# e c FA.&iBBR5[95l Dr7Q6SZ qdP/"E-oB6K-@*o ݪ6at&`s1&_9"Z{Q 3/3`дfIh<94͊Շ_ېMW^֗w.MFw9򖖫K}0q%K[3I$s\R;W]F=^/}şe_iquA\> |ZlI,zA^V'OI읒&O?Vw1Μ6`ŎdAx qoADA_Mom0}Tђnf9֜9>}<N|3 ϐgOW o]GM?N+?EtN} WwnGQ>hCI]ڡq=m=uhH^2PX_jXK5"C97c(BIulN?~+tmtEB锠?5-_bCoJ9&:>:6_=78 Eڤ~m^H`|O>My7#VZ/5MrA[}~ ›1yklv,b>Oo̎"R5I#)RkHsXLໜQI>!&.MqE [aQB3 „`x+(LN_Ut^Rt*&t$T)G/U ^]\u+t7 =ƹ5H*M兣Ǫ8MTӥ'M]L<_]qQz~H r1g+3BDIFA)'rs͘s߽E#Ґj1& H)zLCJI .&XKZ1 D}aTOsؤD(!B$KҚ68e Bk\c{kBo񏩓j iM^=C>&N[[ٜ\ww7qN7cL[[ ,t3E;0.vL]f"C@5]'BQ'BeV%joUPC(*ɛ`*9)4^& Bm TbSbLK|O7|Yfq+UJ6J"b $! \3-ћiN 0ʪZOڧp~eb:Z/mjQae]vB{ەBt!w,HB;"( ZAT16kZlb+GtdYlmٙPފ3O1g'9Jxa= 1|=䈠 y+sl2yCm˗ >1RvѲZ) JyE$>&:2Ld3l 8re2+4xQټ{<yWs2CujFJu͜l8II-m3kp2SBt1 (?{Oq_◵+Uݬݸ6b+`sU}R\T{^ @!0 rTHa=H)ʕt8{l=-QpGggr,CL8S} C "x\ӠTN(a@HdDF@DRQ A\@H5pYߟBAThQ-ᔄh"&6EEH $O{AXm$OkF݀O?5h IŜN"& <.79ɠNjMr~g'uvҜx;oG%)#^p(LXhͨq!vqPOĖO62vR7kM4 L5Ƒ."&u< J{2pN,q.RpQ[{zt =9k E%좶IGeDRIF5K@tTRxJE Q0uAG\+=|k-uCD'$ Vg;9Ð21:\!?7+)e6kɽKN' ǹeU 8}$/IC^2#lE*covxi_REYЍvG߿n2_hgcw/tS @ 7mpE3eI}y䈍^rr[ i  l0UZP~l%3T<ߖWJlc7v].3:E5yoV6`vPE13& 7?PTcVT<.y弳:u4~<.j-Ocn=6n&CXp|Aɕe.źil#K6J!^ ƙծmϷ{W~$!ZN.ߙh'i ]^Zn02&zeW =gl=hXt}KXHN;hm졽Mr{|( ݆"]EaPLa\ۧEq۰əwݱ뎾:y}U%<{XCuyf#dnWIrM̹W}P >6R b:u(T'R8027Ρ4mA 3 'Z 8ecjS w&獳ZRH2"iQɝyv8Ir6W$c9$'ϭ$oi7Bś?fRD`)4ך魯 3Ňp3W;%ϙG4.} Yњ/lh+|7fWZ"ށ1geWN"yq=H{b9xwMmָiPb?ql3w3OlZ _~1u^{)ftKOSk 2$$N1\n`)"#tz!S3+ۿ9LA].gJ sD-6`M@ZXP\Ta5L[47]JiF ,&Qayd#9kc{xڟ$L9*m#+nx/\q$0<$zxq@~?J >\Yjxg=4Sla20Nb4;h@Ȳ5pv# In1VDs4h>Z\3ŵ,-:0X%PJaXi`a?̢Q}TV% J-c ./qB)PʀStie$XLjU_.h"fcI3A0ɵ;oB!g̀D1gz4‚S I1'm-=ˤGhxחZfz^y~(Yq-Ɖ64^*TpI+1ąDDjbcbM0&;YwwiZ\WW|S*zkЮE|>#E(l;M4-Lɣݵ@oX)ЛX &h< J/@/! 
ë><>~-Xr+w ci3%Z0"Bt~r$:ɭj5GrS6V_,FS9x duX`Ҥ #S1b$SaLØJF65 j5ND`B<UX-zg՘ID>g豉hj* iq"0͇Tsb۔t| D .S+37ZOSY'S[SEF9LLn ds&sp󩇣ismx8kn{A ݇_ʑIvSY?)Lat }ɆIc67ݼw70eͧo{vkc #LJpx|ߜݮ ΃,0V㮾abo,W}ª VlMک ߎy28m}pWsX\:W)#D6ͬ)BGZURK,O] ޞluujF.)e/ָr4~+<ݍ7aR`~*Ljd<)VػhA #((~:Kvvˋ75.箊8vWtӨmZʄ2a^!X&HSE57V"FMsFV;|x9:8Tj`PdOL1a(S&G Rt4 Zrv/0a49"vcʎ]%p?vURv%+NUXaW.AXUCgW JF:vٕH5L)4>|D*+OPRܱȮ$ x.=*A+wcW/]q$cj8>*o z:멿嫫0]۬tɕfޖF\…6;BVI Y.n2oϧq<9M7G#3}_HUyFHE!5X-yF:H*0f~x.!(qq18~t ua6mYhij}Hu7Ö6qJ[I4߳L2曔?i^=ь-CMK>9['5wL%ueɚ굃y(I9BfA >xx 3ʵo |MVϓi2LQ0MddC3wPt)^ASθBgu7%oO+Y@?v^uZ"֚VORjF bLM4&b\i[$_Zu\wσUt#6{{>#&[>GF|xS.Dn$Igs#^Ǘ@K;ŦEgb0Oe%)qդEt #T#h1n&(uwM\bvm6W֤2aKjNs˪Q ,w9%)Mo~Jڃ6QK-9+R 3\էT7\0sGp`M޹偭yp:+3S"Up9T bP>u3pW/kȌsK/35UlWryWvj}wwDkhn6 {5mLɯU`d X\viY 6B$ϕ.Q|U׳ߠU H rn 'B$^TjoBQ-$}j[Jsmh$]MH8r_D^pSmb-]MfE=głP d\ջx7W϶έ1v7h93 ~ӛh?Pn?l:E{;|;f ыŒ(Kx GYwGosBuEHIXxZWx8`|rwha̕J!ooKn-kPoAx;2צƆ^h(Tٜɼy1Gu XzL&-D;8LV,]`̫br45fDuтV^}? lu3.'ɺɮgH}ᐁݪ8m:Nµ sqD>(:n4 \|4WzO~F>ыSDC=q<àià!,DǮ=zLz +fhK\ԃ:(BRǮ^ZjqD VL JwmIW>.1Ǝa. ` 퇼ZH )^̿oDQ)TIVwY'+3NK1 ZyBtutŅdRDWb p"خ$JxI VCWWbA@:]JMyOW'HWRXiJ+8-n9+DڮJJB(JIjIIA̶ bADkT Q~tޱQow`AرG"XKC:FWz=]TiҁE~0G[tp---=P2 SDWb *^ ]!ZNWҨND S+H1 t(Jh%;0h/b V ]!Zyu(}ӡ+*yAtap-.~]!c( 3xt $u,'3pբv@iJ5eB+D8ӡ+c%+,y9tp5-F]ZsAɉ]&]v<)*T3}䙤R&޺dz[*lcEUǏ`;.:zIel鼒HS9[{^SA=u_:}^]VhbۣZHIS(g-'h?Q>N~<9cӛ3J,c ]`]!\K+@i}$Z {+hsΉ=GEAݺfz)-fǩۦ^+#2>epBqAf!\VLλ9R^7`wlz{F8'PX>rTf?H셒v-e+աMODWbrM}~hk~(vTOWoBW[CJRWXb *V ]n/+DIuOW'HW\pfMAt*5t(=] ] VBVBW;RNDR b k'PZ ҕ\[Q]`m1tp%)]+@iIҕ,gSV3py1vD+I Q>vuteP>m¥eWW˞NլԍL耲sΩ:r [^oKi!{UҤt^~qiP]18ͽ_SjYnX߲%1W+<kGcKJ`[PbV Z!J߉'٥3G>eo+h\=}*rY{l||BTޞiz b 2R ]!ZaNWRɞN-L m#3^ |~:+T`ċK_9Xf57ތl ;L?^QC0ڬ7}kjTˁ}d)Lcm %/}\ELB,n\Fی?zޕ}?~Sv+7X\|_ҿy.:=\_껐^Vaki#·K/ O@7[}x'o\?us~kBInU#Gׇ?jz]}[#wVL9|]_ n&uu/&j:\]=zؒ07|>}-ǨMR.5{ٹybNUu+H-4pLDsJ>Kl"} ڨ<QJOuLyW| l1Mth`\EG f<ؚ?]F 9LoƱ}i20wψCAg%<d٦_'&ϓf8M\>gg-:y5 *7;buY\Vowo|l]Ū᭛ AC )Й׳u<$²G=\]՜㵷xbg7nfm[cޅ+)ޡKFC}J~|zZP @矸B R< (?'1~Ζţ@Hr˾e(tq}}5?)gV >5kJXx,6FKswIyl _ey-Pú{\ g&1?Q}蹠?@ږTk2xO߂JY ]L%UoKH)j= ?VƘfj 21ăՁ,0ɥYbAiOEu8b]TPP-2W!FS&F 
͙Brc.6Jb,;A]`[ΑzH1~oz;y¡g6WWc7I6j*x(Gc5͕-ZY0'U>"z:n P&@zc$@ҾUfs<:ħ( d~Ev(s|7A0-d)sj|L.- {)93HT ƹLl58 Кgޅj6K'7ϫO!qŃjw^$PJ=`< *R!s“@ "InjD6^q)׊/mm%1R]9(Nսcn#1y\$K;j'鿭R;-zߕSʛi仂R(bH'URpe%č2 5=6ӹPʀ䱵fD+bp OP2S>6Jͼayqh4&99iIR2iLh2Z ASZd%2Y,\N|l 9Cb=F,ƗQN[tn4Vq4{Fq*qe̖,?~#pR<~|Ѩu#_[7v)Yd\ ̕D ɑy딋a<88S\hۚ.#ς{;FH24cJ" \gMҚ-1S2ÝIvGHG2X9LYB#'j UɂY(,;,K#&% õOݲgG;w`fuGTG–v1w].m. p#0ҞI1=<%^w76M;ށuF'&&>dP{ŀ3/NƤd* VkR+\{?Vadt>WRcF" dF% L`rgTrSc.G+1O7W/{lwM6d6FmryU_e{~^2L iU#$ 4d"4A&0( 53i8%g!dN<^DR`hyg.r閍oe q*KmVP2jfl6sd0Ȓ ^!ݛGcU^}uCӲuS q` n<~NJKnR O69d5\lB%יJ߮fk'ΘJTp!_n+L{ȒAxIw[FH=\Jp4jc3H\JWpv ,EmܺGzJO;; 0ȾY|n_UknupёԑcRG˾?[S'RqKy%Yĝ0i ,Om .ZP)鹛c)bv^D掅A8Sr2Nd#֋(5 *rkiݘ8K2F" PpA3s Vx9Zpޗڂv)OvfY߯H!t:F% $Hppt%Dl4]3dP ouZw`^eLZKІZ*2X)`Ɂ!$x}= :oi-W{c]3]awtڥL879(JsD9itF(I{[ MIÐ Dp)aFiKCdڙnpI!kHdDczB4,NTF*YE ͸'s<U ZFIsNorbΈLקt.pEs &s64|9sO(oxcT!m-ꕺ0Uu<z_L}m!|̒A/^.j4\ 1,`X۬qWPra\y5oc:Np\0FǰƷxV9n4/8Vn19PB΂[m~I) zl0(bH땩FvAdtC#L9[ )(ԝ vzwV`> S]4?@rD-6`M@ZX .Hxsb` 'c#JJY`f2~Dya*Q%|eÅn>X %a+;Yۙ vz67du>)t}m )2|Q\H4MQ gHGJp}8|HҲ%e}磿0jՈyx`vOJܰ7K3Nw@,`gAGIABTpR]@ 4׀Ħ7`'8Zh$wh38ek@v# If1VDx04g14B6Fn*cn~^:\ Q_%UyZ8iX;ZVGQQ)FuqE'S@S 0taE$XLjuԡX`X{wR>OV@V;Z# ևuڢ8nvvS%CLrp B($<P՞k:FY^1T&@o%kO LcۦϮfϳNSo:Z+"IfQں'x.yjwlߓ^ tC]_ϒiyI/mo@9{Y}NS6ڝk_obJLf b*[+DT@"k1Щe=W=z{D(EIρ11*֥,&Mr02E9LƄV`=6R- x$ )[X1c|u=FMFS4O͙D5S} 2q앏jyWAx&)PYm¥TeYkIxCIKMH=XՙcمW=_:[#u诮۴rBR,L"BN^ 8^{l(]@tQ_= /gh.)%/gK!hĜo OfA4{ɫ : +Zj1X;ɐxp;NթNEN h2.jKk\XοܝTaR?FSn+xd!R&R/5eDDL ` Xy~R?O;6A};6I\aG5R䗉%|/\Dlr쵲2aDD&p TQ͍QmQ'(CG7.=ӫJT:,¼ C(NjLL+WVksq1P0oQX; v|8u<թ#CdSGq'#uQD"4 [,4*$=FJhaSܧ-ݣfaI2?'"Fd - aI8PDTCټ(OX 9NOvYX$xUaaR"r'՜9.Ua%ɑ%HD!}(ޔ&+*wJhÍIBvbl0z&id(,}tVdJF(EZJH1 N9ťо93@*72fkadU%ba欋Fp!muqKaGtM.Pty:)W8b[i:̕z$bIeT)N"9L"ziLFÐ= P'K(IItY$R:"*3bgkaĶR1٤X֙Q[=0}P D34sKxpJ‚pF;-s aBg 4d`h5> L('8#ȁRɝ?<[saor9!c_1|EĦDlVi8b-tj$v( M@Q˜%BQIFQYQ+ t: Ą302N h҄Ij$%j]ٚs?_j X/_֙MJE.%.@=>)\TG ؃%H0"\< .IDZxH2!9Oaoco>'9U$l:Y-j-38= %KQEq:ETsShpzZ/O^?YM75<MLISW)JJf2e $u_ܜ'_86>ŘQDZNlz(AuaWq}kz{[#p~9[@, 5!*TH--MiVq@V1NI^atTD/@0&/4֡]43|P{ 
*B50b"ÛBp)68SDcDp`-NXZYh6dlpz; bJSz[wBy/Ru- fv@[M5/өmKfƁQr{PcVF =J'|F[cSJId)-< =|kC D It1DvhfMTZHz7 OItݸi  8Ưo~)`!#clM)(Iϣ68V}UoX}bݥ-k+A:Ԩ'ܚn:FsFfK9@<-U&p7 FG2"̭gi@]4N&&H6鲿#3#`״lkG'YvY̧wziJ+:|y8|`ŗa[͗wo%LaxU0 qwW(FE2<-> 8ӯP(.ףƞ@8\̷պK?WL[<ΕAN.Q. [}j&bղ톈MC.Ok'"4#̪$ջ|7i[#Vy-LYl=;0HvzҥzN aOG[݂t٥Rukz K}DZpP>Wh㶊0}Vy[wqmTVcr|0߹`(ޫt쉔J)c*1*G͛N]=~.N߷5-My|h#sHd,K.I}q3<s)] Ndh-9!щ{rr۠:vSMj~o% PJAl*2mjL,ŤIG驹똼i]œe'ieKOa5Zk[qgpdv8Ƞ@Fb%!62*] Ig %엗бa%,qx1=FoEFPw!q0gE@Tj"l *e$.^eX,z'lL$d^^{zפ(Jv+ݎ\ vtrWsLqտ2\&)G`P KwD|}`peN#M9vD[$֢\` <Ȃ6υZ&R:v@FY; ZDl FgqoR0#4'(xAcB >GFY]/Tm?FuzYYԣQAjSr ؜.F rr~ 5t3^c|tqE-GQ" &+@cn1 v:)I]e(%2cRuRqryl0 $zJ)bq霳)mAhm$"@(JAE !I%/ Pj4vCP V{' ά&gkNcӛ2eal{y RvԒzEZ֏ǡmE` i0m|Ą6NxNb&.`:K,#Z͜t ap) (P4pr-#2\?{ײq$Ɍ|E$tYI.{Xe|0bF"x.vp3(L)HΑ{E,2%Sz3b=G1wV%gP9jRD#wH,"HfkJ@(l۔wQlcͱʱuoDk ښ%5W<1o/ۋdR7曾]ƃd+L61Zv>+S\Whg:K![/_ޤX&\b h}8tKDZh6DW$ ٭DjdV:Bb#aCtjQWmCU]#]I2lJ7CWKL[+OW@Ïݣ0piGw|:|ʻL?=Ӄ<^ٝ'&|p]w_|G.gx8?C;#ggGO4ȌWg5p?\t7;[q2Syr=q󯓋˵!wg-WNm5yEw䞷qϫ$.9Cy"'ӝ3y ,7Pnf>&~y4?'w);i1|m˙.P̝)䶤&`ά̄+ݐ7n(hR GO_Y<] 8upD~] e26+C+Rz룧l ;6CW[Kʼ%;#+g5fCt5ՄVj{t5Q&tutM&m&7CW.VjM {xU7EW]n]Ek*ZJW@W[Bz.@ f+ɿN^ \-,Y΢??ϝEb̻(vWZ29sw[طhzzBs=yWgBrk>=m0_fn3!v⺼#9D(>ǎ!G꒙OeTԐ {O̧ ò8 E\_96g|a}CB9rڟoz^ 2:Eq-jP,GgE1(b|7޽qW7,#Dgho Co?_7وwt hGjL[]FalJ/h=dh<7]_ZdK.;RaRΙ j 6p.ڜ;F~i1׷^[<=l &˝(ZD˽V{SrUyQ^-ScK3O e{m1E9C4ن06[*eQa-% "vas17!̪Jlz>ږkC (%5S6%1#FHTT8 ύ:r?XJ&chZ`95qʣ`OɭZՉ7PzkpUGy*dnYpD"XXudа(! s'[GuivtOdF,. WHJ1y0`=gafRm>`Z y*.|i@EחqjϾ!Pb8 yqTplƥXX$^Sqah=(g0*^v]*+`Ֆ`l(հQnCAEۊonջޠ|pW.#Z` J{ o|2oM(pm b|SZ, |$B|iR e@iZh7=V8(&` uA Av%= 5 Kz- #xq2F- @4&5 ,r<WM0 rH +w{oѢA ŷ]Y"l24(B@8 :zoߤMp` ĐjOKߪ#;Uy$͑ :j8 )[U,(F+8} J*HOzmQP2t)̫~KȎP.JĀA `3d1! DٷAJgF]+c#Czgy碐A)Cz& nfwX4ğO(NovU}lvH'"4BK!}lD Q鴽>rVT.Gsp.0`!h! LGHubGPT8xi -6g% Cې@hͮ2AS4<Q"j/ ˯ -tA\V24E6O0Cӂk+ ;/0ҥe9ѼHΌ+6C e9(w2DhGS`JrC[џW=!Vwto<&[}YAb; &d=cȋU4 Q w}q*2|GQ} $$¼ j ki* mM3:Z]\L>(:Ax*O&)CFKtAK<yx`Ȝl%"i2XTY$V8?z,:+0Bd x3 J};QSsε\FA7h? 
#Sdj@gV4ZPo [3pE`!-G[퐄͸Dݡ> WshV CficYx.<\t<_^^;΍3!V B=0x@ˑf=W{bMhS `OM ~w6YЍ~]qwId0Bo7a cf\_[..sc.e᝷!ປb;ab'J#̎ 4sR': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N u(D fKNЙ@Kn3N @@X@bN uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R';x*-9rs؎ q3N  4% tN=R': N uH@R': N uH@R': N uH@R': N uH@R': N uH@R': N uK=Kߟϥ7  w]^?ϯn bq2.3_mƸ6և7.etQK`\GBM 8fjhU$Bátut䢧 ~3t5 ]MJ6-5\ ]MBWmxrgC(99]5t =qB#iJֲSYb=]=㭴+lmg2pChW -+DY[*c*!BFwU]+$!+DYKgx] Uqn=]!Jczzt%UuC ڮՄ%%=]BGWqBwr<ԽztK+XwV.g]+D+t Q^]FS]S++;i=]JNDHWxF3钺BtL ѶD_|te %V9vhcb׆B{.&`*i"({O4vRKND2Μ64߸s0]Jugr݀hEu ߌnv=!<}FQ3k^Rr⌢fh= ]5C)ZD tخ¾lt93+{%fh;{t(۶DՋ)!坡+D+I Q*++.ҪCt5ѝ+KuW rvBt JHeHA̻0]+DZOW%J++)-]+|򓜚q-%#Z~OWrf3tpyg#>v*J+EN $2ZAM Q~e5ҕQCtjGt-o=]!Jه_#]YHm/|L؀T#!jΩrB^I5kWQj+5v`_T6FAyTNPg'դg%18ʾDxB@uœʶ{rܷɉ^ f'? \x5'Jjk"Е]O\uU3~[g#dWHWLk:^3Tu-oBt k^tS - ڮՄ%=]BZ%BRu_ m+DilOW̐LtOiT4CzuJ%m +֝A{I5C~BmՋЕ6F.9 Ψ+@ۓNW^!]KL m "\BWVWHW0e['5 P{  MEּ̫OJlN>ɬT5.Y>Сu$J㻚GjU~Buciqv׬J Ms j.G'WX?x\D2n)DyfOX= ,Yռi3uTZ~br~j2yrgf`#߽/xJp|,n3P(6ߏSMƿx|D\<QZe]τ"ūɪ>Y~粢T~aSMX1ʔJ*˔ȹc! 
)9Aҋ`"JMZeS+ws>0ǻ"e1]<CT\8&\w.%\hxp{c8981i52DqL'FR=W5u5۸ͦ k- VlZ~CW)LVyFuiTgww.bSCS 5 9TGs ccby,L p]4[M\4`Z<mAPM bVk:K=OTHJ\&~a\2!:ir¹eR%uyD9itF; $揮_o/552H\u4X4/xc侇pDŐe& \+)CWOn&aloxy@H.<Mjk82ߐ%tYU6Xņ О箜(e?~n>#^N k/ZY*Du{,#0Vq&4y53 ^9,§+H .ee8li70m\mW/h-5vk0ǟFwypGh2ͮ˦"2^=(zL]mͿp*ܨ}ͳ$$B<0 K+sݗJD\IFRN4 ^Z = Z~iSZ6 )SC5T}{YՍ<8'A@5y h3EZ`vðTzr4%\[.gM-LogIZY=nb{J *ږGogaJ( 5e TIw+Zɀ:GNj)  Ej9_feyL1f= V{h ~l$ezH?:$?+=%Vr^7ux4"?XT.p*5fha鲔]ɘxGёzB.Hg%xON28pI Iܐ{.wO8t]e%Ƨ6K1<^^Yh+s ~iXvu y@msM=zp ZZ@VyĽ"&yNy4$ <kM㸷T>Iۯ.R-.̮&dy >_` p sJ`pj !w]]_/ӏM ,촸{tV~%$B/o9BX|i~{Nu 6W*b|5a\`+>½T?n}$HEь)Tv(;{){w"'=h9˻oJ6UxGlW^9n q/Q"ќomTĀ "FE+1;.1kk{av߀[\MϊiYބT~BX`@ӳi2YNu&ںg։c)lrJsu!S9F- 4j iHpOg{#Eqx#@DYmnԵ'^,JZ 2$䁕a%2NEML&тKd֙F TN)'xF6VfZs&apr$`'="Yu g9۝<]z]b%`܍V7ӨLX炽0.t+TFB#A9|Zǧ`h'l=줞y5A'r)+I},{ »5V?{WGdO3I#H6` AGPXRi$5A fepVQ @F6)4<tZуS*prf3YTu7D%+&m6/7Q6zZ]G P6 fv'qMU/Ձ>18TΑ)+t1$[ B@0t0;=>}5X|x_]U ^NO:/>.[N>q-8,!]jb{ޙ 5=rJ7} fַU_ʵ9XggG'W}/G;&\nUhy[ޒکo>wixCsk o 8NEDA}u[Cpt6뮳ls:7x5e{_p ;^V?F=vz)ZIyXwnTnw1ݺL_לW[ i4xQ~цםԓ 4+n}乮;o9ev9C{#ذA>akÉ6fH8W釯nu]%v)ZںyXlvl2tN x>񀥶Hޤ?g:UoÃY0;~fylTtcg\q=& '+a5h'%unNݜw8cE0c*> wT[YNr_h͘5fk'9VbfZ2ۛ(UNi8K,2(cv|ZD%c)$yW?^Po Юdފ..UJ&(Z 8ɐ"P{/| 5&ATQ4l66li8z6ki@h[\8X+= JU|$I|7쎖`L{>s@?:Y\?&ڹO_l.K V J hVic3 ѩqoV0::5liQ$C $ kq0Z|2 yEd6obc#:奵m)/A %g@38X Jeꎀ69&0HAUi qK~D~xbσwshZ6o\lJA4<ֿI6lR.+U@"`ѻNxnPd}X . 
e:)u 2`B Zc 7 GH7y=Pg QWVJ 풱)h5,1$]y$AH*+#cܐ1G!e%9'&ii423.,1lfvܙN<7_G/c{e&TTKNi旰]Ӻwm%6_ݜzWc 1>$F*O)DURUE1zlj 0&e4B )EV[*:t ɒR0C{[7ygd/n+Qk2yx<|}(U#cjᔽEZJ*Ҕ0Yyhв@) 6wF(ЭB:GlL$.sY2:i +2,^RnUћ4.+~`ti1 Zr)fO4I Zc{d„R"\:c#9ZH2!7[סܐ ҁ<z)yxrqYS.V)lM#?-:Dbd .yO+ُ$9w΅)X+[υ|_ ,yUaG%3pZ.B4E HtYCT2$3됔R1[/X IsYWV[BF1쉙 G# <})V `'Jۯt̳E=|hĵ8EgDrs:NK;o4?*B@ӉyQeﵲkg}Y*k, R)9K/+L`hsPB))tE5Ue7ns_65v˞=M WC]붇z&-^n W)E=0I*TZ2Ⱦm*ɴ-`Jt1x$tp _/ bqIB(PvDh]ʜ(x>]u[zIm;pTn|%fir=:.B^p˪,;./u:U:U|FB28ʷI(;FT<(% '6ҀBT|Hx*d (֙]rDMN娫r{ 1zК@T cEU+Fh3sv$C6i{li#V_t"b%EE1NiMԎB(=LږԓK2 $V*cMՂ>QQݜa)ͫ^˫.j\l\KMh_?_NօA"ۼcW7=VLb5hd}x4*RcunSĪQ<Cmay&$/b7U>p&$rQ 9Pm.>Jfٕ* ֑9;l6jUf#cXF4Օotq/*2^#}Om/'^|^a< NO^^#vzR*q% a*|h+H7XQɶSEcd/d)J%kTh:[-NEjOcITDu[+sG8;w(Xvѱo`7M"/z!kNK %YK)TjBIKExX ڐ3!32,:)k2HJ 9F!bHLɆk{RRޭq4jǞQq_a)"#X(OMIP%ZIDlASV(w V6J+-l,5INV 1Ehf1"cu6%EhaS\( .#l3$)sa$@fT2 !1gcFǾP6{>C(k#fe /XV=n/_ѩP>iw=ؠ O7s]).*:!-SSK 0/&.E8!dFQŒE2ʜCVfDw5iw1q^sck:Z`|Ґk!?}1/ I1lчP\Ѯ*@JE " B&mz|9p7ٚ2\;7 C˞ o^xP57|lU O#_9vv t6D%+rb1ϴˌ\DNČHČb+)8X)BH(X }cRzBLH.H!jmw]8ݽŚ _z[t瞗F^Y2 ȪYuַ b4 nUyON0]CgZglj{UnqG/aW ?t6~?g؟~4sK>_~Yy8Ӭtw bU"%s=snpOK\Ou q=áck-cӻIw7P퀧>6 ~:`yo~*WXշ˥L| ?ˬ<;:Pfeƃϳ5Wqy~0?}WB+^I~ ]s"ԙ7JBNY(fyu` zۏ\ĎNďHď/3 uw,Qy*F$8 A0k$FVeI?PbA˓4۟-_|:4yʣ&G{: n" B,( 4Mhuq̠,U~Zm0z&Y랸3]\됻ض\,vs2>gj):6{گ2R.VE"tt4 sX=u>7N5N5u_,3RXa.gTv Y)Ը{6Vk1%c+Gr ᥴ| u@*H73gY>cjm꛴6e;mItDMS7bӓOkw22r!',%^b$4Fvq&S8rcэǏf9 Rm&* |= EA8bxa> Te(q( I1(;Lc(V7[&A'<\_oJa|je/l~w6ri4 H&gkݹj/8<ɋ﵁|MhnmRC%6Z:6T L1E(mDD=Z6:T+CB![V)aBMU}u7Y]*˨*c El(lh%j2nk'DHu'Z( [5,+eSO7+͸9 &QFmI 0M1Z"t5 6h,,^¿x^Q@{'V@,{mn>gJ@sFImz2/_2uՕ1o>/qPV> VŘFSw reӂ7-I6϶iQ2сwĨA"9MMw n{ j!( :&7LҼ|OV`RfMG!E 澐%-צ`o\גu^;o\8f/Rex ]|U:xs,@EN>[t3TWݷղu>$!ƪF>y)AS˻5w+%FffgݽkYf\,u^zE72>CvYHY[7hlS\Nw(CwvPN豙#:l6p6iȝ{?6e"o~>{vlC ^9LLxu (TsqڝpSb~7ni8wIO+kUֺH,vN}VZBB*OX *\V&Zv+|3IJWV6銁zVj9v]1>EWѕJte+5֙)]MPWF驺"`\4v]1.\tŴJ]WLUue:S'"\/e.bKbJ&+`g+>sV^WLi|u ewŴޏ]WD PSԕemQe/oEZ>6pHr` sdj'A]&'9H#r n?{A.+p;5&rU>KT@X? 1ggG>%] /xKeH^(1T1OtC/U ]Z6z۷=am~s. 
Ju7`{A}#Rq9}["R&k%K&#w\6m jhL72ZFr٬%*߹rVEoEwULm6RkkW5wy O?4!ْT궭UJ1H!ĤvNΆj):S[YV{Jyvs9Øl͏_|2\B=#Z-EvWs> 囯vno?\io,R׶ JPW1z'㒭Q|A95ms=Y=QfN5/|OL D`JY 4ddZ}QEWԕ2i@g<AjH7h%3Oc<>6sQ *-aJA +4i ZM/ n0Vt\@Mj<T lyDϸ.LG((连mPv{k5O/G;x~~dr=t劮vz0HW`0qEWLfbJo&+i^`}}qEWLkuŔh&+ ltŸV+}+\{N[t5]ig)"`+T6b\\tŴR]WLdue 2pq! *9v]1.]SԕW1v6UMW;w3)]SԕCe ']1'b\f Z}S>E]!Z̪̧qEWLkF]1-]SԕJK1%r&o2ʢ2n^xߍz˯~Yb[~KB.ι-urL f^9k>-9er˲a[xZdԒc+^פ}SjQZr_MKz<^I7^ X{GG%]3 Lݗ7]1cQJ0EWԕ(f+*]1ЫurbJTEWԕŽ`B?`p_vBtŴC G)Kt5E])"`i 2\tŴv}WL\ueStEm6"\OtŴr}WLmu FW t3cSw5E]d2gdq^ ->"JEWSԕʏwPG` o&zMMM3hz_o6 ZpQܜR= Z;?n`` +E63)MoP0ۡw<臋-ЏR왗+_tkxm}F"Xm_b\ i ]WLtu%Bi3w<胫i(λtE\6b\mrZҩ J+auN}W qEWL; S:[t5A]7銀Q+5t3cSPt5]Yς+4Eg>IWD៮+(EW/+*"`gE6b\MW;z1v]k]PKUo\%D67RjB;ݠ--Xr-=̓u.[-'tRkI'lЫ]W]EWJ7ULZrF*,Yӣj+̨%&'Ƶ IgZ}SzYZr_KKvYPhT/Xĕ'FW=)G6?Z[tKփ'r SO\ksӢrd^FWm1#]2]1C{5v]15EWԕ2( d+DW "iLijS銀+ƕ:]1cSYt5A]p銁FWkE.bځ&D[t5A]Y(MF"`o1]1W:9Ԑk>]MGWt3}z+v: kvbʑ-Zt2B|F"gߺ`b\MtŴvOҕuY?3H#)fӄ>XR ^?H2oI)үw"_̀2>?%T HyJ_'הO˴v%77 }`gǿ]7MB?쫦,yMr{_mWY|v˟ۜ+ޯo;—SHtf|oⰣyߦ$g?qmoܤݓD6X}%hcU:o(Mi楮kQҲ}6?fmJ>gq\7] =dv ,{?y<ˣϾIKP4S녛*ߚKvݫkj˚ves-vvP`VO8ߌoNՓﯧ[f= TB~~p?'HW(t=tEWf=q(2Rq_v$]1(EWԕDog+Cow~{z=v]Xt5A])/GK?`p=\tŴƎ]WL銮+Mtf+2]1P+tPt5A]wERa6b\sz?v]jrB銀ȧqi 2}W;SjYt5E]qUNѕ( #]n>A5b"J늮^KT`f5";R>f8㓓n$("wgo͛eߎ OW8Q\?{7BWQt 2{prr^y'm"PS;xG ' @gVoui!Jjy.B}n}0`헬>%4G&)BsgHw$6=$9luuۋ\7Ysզ:uY(/`TJw PJr>=7wB<A޴\s9 Sf0j$7.(]cpO^wF8'֤@{ur38:Akc0mƀ. %9*_*X )%$ڨfis"$U* 40Вj,ΚFR-+w2?VzaCKU+ܜ-ƻ E@P\HlG6lmMC *TD@5w˽AAQU6M Ü` U#\ "U+wIQ22)LHpm'xIiN +QY[4 U TW7"j,8H&;j! W۞dWNc@!7CAdܡQ l;󭋠@zNB€(!2] 4HAjTy>:LAZ dT j7X[y(y3@fih!\=+AtbE,N)(J892rNj fY>(Ez@?PiW ()>X|A*AŝETݳ. R^'? 
2+= KV#5S":~j5s͐Ȳ(n i${jE}∽}ABufOA [X+f1*Tei5 !pBK.}?yą^^&^֜s>_Ղ`X*f=f~pvM&!jhX 3 b{PT8xi7@ϛБW%sJٷUe ìcJ>N䭦ps_~U]y @%8A.Ƹ_a(IKGa\'wl=H ܮrX5m(O1{A6dm fVʝ9{[}՗/VWGh7!A&(S/#KO~5iMsn/6 Up_]V?n5KL|"@uj} ~qA]ev__"w^pɓ !K6 Ѩ 37 yKN*}炙~Y;o߾W9nܪi;'!8D'fX-Cٗr 1~*',Y/#;~ƌe7:~kU&yiNԡ'YZxxR$z]^5lZphf"PGSd9hTr 8Tr[J%hM8JQJwSəG9U&RNWj/j??]퇒f2BW;7]}*]1`CW /mTNWIX̒+RڧS>R?PBWCW:aAteJCW ת/BWGHWdvAtŀ=-n0K+FQ&:BrDW [L1h>tbQ ]!])iovkVV] m z= ʾ[c-t нDHcb~”N:~>'g|>&#,gdrV%:3/4Ίnȡʹ_}#j^\㟙Bk]| %AWVs^_]1CW y6)e&#+ѝfAtŀ~p{~hc:tb) ]!]Y[R1ȀSX ].ڨd^ ]!]G-DWl \BWC ]!]%?d{c*tm);%{[Kq)o:٫]jA>iLxb512kQvJk奼nIKzPNJbv!0ZcA~?=rTE[O~s7=f/dPҁD{ ]}gp-iZ ]1\cBWԡ|/tu\@ެc|yfzU\U)Vuϡ6nꬦ|vmCE0<~JD:3xZ}'lyخ#$yzqՖ ?_!d\~r=e2XU~nCG=>jAÇ<3滷o?s7xW _>m? mb;i8dBTaNИRY^0Aj Q2aRk(c^wϹQ8@4EK(+٭+(ISGS(4)]E/-LK)I!Xn WF(>bj4b'ɡDX az } ],i3~ZvF9CrF|jޣٖogȟ.NDO-۽~2*ݎ l-3F jč;`M]C<.Yr:<[Dtja2ΖL9E?|>k2aznURf57TQ[r>vgy?šѸw0ܺkt2 ,7.at[XsF3܁<;p) fsT;>Oe`҉`pWyV?'hyovGopm} >IlkOҞNν}aVRгϮٺیFd rctfX7;e80H M8/#4KU#eYJsT%D SkA{V N=F#L@δJ[+^582g\)s7U}?iDٚ-cuQ%EkؚB fphI4 aI*LMSHLp# z<g$x>_p)N}]YB7У]cʔAPVFxR &leŔ!I:ѠY=|Ƶ^v|kzu1h:Zau:2K)Y -CD:F>˜(CDByTJ GL $Nrd2W8ÁH2%b 3bp լr™{8GVlk]Xe>7՛dhJQv^ᨦ%p :H:rCʝR[4O*8,aLxgXJ:s6y>LGF ;T+QJg}z\O|jwTg;l֨bW,/dYJA(oY>v3hܸ Q^IS6%VD)J=6hdP> JΏ͠GxH$@,NUٶDia!"1UvA8%3xC-@rR!Db:Q"u "J{ biPn#⦽b?ڣazh2G'*aot(rj5j'R׊Y0T(RhH܏l %ͿEp,£h%7^/ךPXQ[QnׇB]ĵU-R>8Y\xquMgUl }Myid^ͦMPQm_19oW);ȦbhlײxZw{ytNHL)"8SG"jڙhqT}alQdTF$S#uL3II8 QQEC%XL=6*Ű8 Ma,',|Q,\J6x^eNN5_5Y]ʍӏQ3k-p4.%M/$Rz0PGJ*{n`I`3dcE32lr^:mGMLXVmJǹg7b$ǂFǡm =n$=x, QhaE@^"! x 1pL%.x#$vIZ @+:pЈ5H@&LPH1Hՠ]5b֨Q ǂǁIYD;! 7q@Z4Mꨃh: т3!2""j 5$Rm8V{Nt^PHܠ^oD>62qG2r?Op/ D" Q&X)} 5dZTD0EW4jE*D, Sw+ ,Qk$*'gJ"kAtS) ΒԅGPg70L|'X͆<|wNndr拆0؁)g}D RLnEFWC0<vb>OxsB yx@\R壷<[!` L%#Xl>%d8յFW@U:G$>2#<͛))r4Q& 6:i}1? F[e}H-6qdhPO"E-- &!  Oij _95ѣHQa9As(|6:J4:!d6㇄it4I6_4gWD0$Mk! 79a\Ob_~ǻ F[CC>ҟ,G-_ }8 h~y~@ \LWծuvWf[:5{W#W;ďfԩd `KA.$ "j ho[o;O?mTQdvMt'.gYLJe}? 
GYڣ>K.{FO븻;6}Df?7|{eH+VcֿޔPnʗ]Ӯ)Yn 6e>%pX>`~o5:#?:hx &ot>RyWС:sP9x0)RDb4NϜAc.xkk-1,bF%zE@/.7982}$'MGUx#C)oԟtvGe.;{Q3mWskD`*!8D>qWk"O_fcA|`Ï6~ !XbYO@Kp@T<>^8᷸|L'̯g RNm#p^)V|>bҺ<oi}H`ZĜVRb|_o97|IfI? L6J "WiI)钼}7 "n}>G]hzy+ zp-u'}H7=o]#/E [R{-6hw_A~u AZix;NUo7 wMr aoi7aGnzS/E[zӕ?U1NuU7Z}И @J@]YhE]X%间6:nk1\GdmRf> `\;DMڠ79<5;v**g +g35*g~㦥QxhF 2Vl]Y/r,CV<VˈQ xѳQZ"R\Ne0Y2H$<dCph,&yRy) ܋"9^= cj/ZMB^v؏c~ !!8A9 ,[$qoS'uU/6Y>Tb,PMH H"SBh})JE\Pz,(/\(E";ϨJEfk(;I0El]bse 93iUZe[*!^0!Jr4n\m9{hZYcJwjUę^)DXlK|q& LQE` E1u!TEqRҨRZu K)ƊPަdQE@ueU (rTʢi1d,վp `)eԎX1vE*iVWC(4Gdn?N w4ǽWf8S c+^&IؓSsolts5Rl- J{W!+o& ǶqvG1CK#='(1'5XꉹKL|<=l"Qj^* K![K($h$(+Od >3zQ*%Y}~xu.A Rl]FJBD& MFFlHJh*,x/HeHQ)!LsISDY ʠryd햬s F:ʍ[ZՎv.d_dܖj%[aџĮx9Б4W#x +~;ސE@a͉2J2N5@KF ̉4g$Zq{~I</GP[PƬ2_7ww-7x;j4]6 TBdmr5!7X`j\Q3qO͕5!v8whe%k1#9H"mSI3 +X+ZKYu/,F7NWC7TF6^?%> P*%lbDf/Hlmo٩C>&]A3 >ȕPiݵ,4wa)$I2R [6Ti6q!T ˒neOm0D3~MbkoȢ!;L"rQ 4\bxUXf";_)9`bW*ʻ@o$G6Hk9 ݑ74GQOw26t!0}45f& ](%ExK^B W\!e_D2Hs Ns/yVdfՑZ uj4H|?hYGfp}{;Mgw^4Bcy3~zQy)Ir%9(hEp딘 Hd<1$榡Y1B]ûz@u@#;@}{au6KTu?v^;t[Tۤ4bVb+9 -2%1A℗8+&lVvEQ'{­dl\SAyszrtŃR2ƹ S'*t5 Wִ:.-eHwq7)\XhT|,6pJɽKa9g e@pp5YGn{yz{)5DXnsݖIVI5ЈAܓNF£XcG8WyieG-RS$?升PkZȘV&>hx,yŒX?X֋ ztܠ2p R])N{]ppe$ Woָۗ^*KSNc<P%ܟNSx˔,txtjsI1 +7 I,`3j$7~&&L'>l~ag0a-jI }< `[,Wc<\S'nl:c2n^xL˦ ua|`$J{ y[Gюh,.x>??h! VGvc,fY.;XP6g7nl1Ś5A~ 4:4}[B X=mܞ7;_]jD|9^h‟">m ^P;A۳ ہ;(pc} Xe +c6gVkoPG@- i1FAG΋SD"Ζ⍊h) 231-S!3W*!\M&IᗤMc}C_K6&?xWmtUAJpʑt*E2H *8R!JL}1`*X+ZNag-rp+:$Idʠӥ0%H,"QR0 [%pnu^#gڲLl|~,ǶA=o*xJ"B@/)r)MշNUJz-b3?:? 
c_w=z#e:Һd2BS$XSE]1eJr{8u E{TZrN} rTn>96)RhTvCe㾢7؍2SI]9 %LխeY-2s 1)3ㅦD(6[ՖgD'A\M46C=ww/^PG0]6 X<<9ښֻ|6KY?TI0DN :z ʴ+|Boǟ7Cx>_ 0fXϷQ b~ml9!V0:[/}~HO֍Gu_YòVe2[5OXqr8o\`v]<gGgYVfL.>o\vE[_zjc4޷[FR?ߝ^odzB^u"n}[Ю,SGlE\l ֮k8 w,Un nGnqfLqA4 a$d!OiOc浑a:oͯ{GW`%>3q78ܖ]7^Ck05 }gPڞЧ;3w 뀹\Q |l 'ׇ.͒oaէd+dA@d)1=sDJjkE,k{')J#Nv#íJK[':UqNtϓ5893Sϔ"WqE~`Qࢷ?mIN\\*AjYv>yFS*:fi Ls\)#o؛*|>j~: ++\RnpZ+KUd,u1W\nN\j%;vsT7W\9jxPv}SK(}lBLDtkovwAl8 UJ\P¯_~O"/qm/o5'dqt4,׼B^Lj?v3=f)N|0S(U66e(:ԤQ~Qw?݁X Uw r~`2)%=:EUGf>k~qxQ Azg 4 fB4~1=ݕYTe摿6p7;/ew/򧽒= CjV ٪![5dlUkUc٪![5dlՐUCjV ٪![5dl`@CjV ٪![5dl(H[XmCj ٪![5dlՐUCA=FɐnV ٪![5dlՐHB䍶)Cj4UCjV ٪![5dlՐUCjV ٪![5dlՐUw55UcupV ٪![5dlՐUCjV ٪W0nO >*hI}ώmMm}|]2n>)0I]!*i~/aF=Ha ݶI[%rGmACP1RrRh.x1VE0mTwݿOrV D8dBhYUͥ)<Ψz\(tn9zBTCS@KҺ V }^ Ug7yoh# )s֐Z-I V=Cdסϴ`k_ǘ =J1sqŖ.D#aB)Nx !\4B2TU"Vl׾!ֺ*騙RQL+yݏB3xp zK;o|ZȼܣaGRɛ@Zk?hC pES{%Ě瞇t"n;uӧ處x"dʣ*|$ B*ptx%ZGl 9`VhsP-b/J6 ML (`1Τ_ٜo .k#~Nr~=Y)$n٨!Pҥed]l#nZ '\aR\clHP 8/Bs>frv5R k홃@\/pEJ %$o8UM*e z~JoYU˻J7ey`,&_a7&^l:,gW2,0)xI9ɍև(4fv3u0v+{`8*STոORJ63Adm8n9ƀCڱgPQ$0JU) (iZmwsW`P.>*m U-x@T\%#?;~%@ҩ(N`!]RgGfZ!eq##` Zݛ!kojkT~|E-iK.YiuaYAx.: F\w\cTM?j4H)9̕#T X UT͜=j#a~⹽Fơ:BEcW_2>zؤlmֻ&?pvvty#vĈ\IQ5f!EJGRU:akƨٛ<﹉ed^ )6m6#p*NW\,JU w[/sbquGzFyo$,*T|S MK+j*L`C%keb﹵kI#t< t^AWoRu\>N &V rPf#[̮rp!q-~x}7l~w =fԓ@Qofw7Pl FvK=xsA?B>Sza\`M&(QiU6$W< CIXi0~HjZt'5MO)=k}WGYx9:=e<*w0$lyG5o-oEۖ]?fr>ʟRJNe}?]^5˿}ZC]\|p|zu%~L{/M,Mˣk}$7ԃʒy-M2qOp 6 [ 9 &H3J0gu6%tE|qζ]^Cs u 54y&sy}_Ƕs{s鸏H o9/fo i|]%?v"mev^p?j28=w??}k8C;O<1OE@-?Fm|^=?2!O5-,k3ZK!Brm&<ܕ,/Wi,kNvXG?~v9|9rwz;v6Ro|a֥_U [LJ#eN&w}Ɔ7Änj > -}__pu'K+c5cZ~r3g=Ac& b}drMv{}(Z] SU.ye aYB:lgxVml ޤ&!x%(tREdfE喃z]^Orr6zzC{L:MŅUoLQj?'WFpWGⷣջIa&{h &U5f Tt.r0e R[j0zvj0fFj0fj05Xq:GUyi jZ wy'96z2 7kְ5*u1k8i*쫎`\6XcR  2g$U\?[q?w16?shyA% ,OvHAx[@]w@Wɒg >]2[&|@b2y2d}Nf<+; Y9YYYUjáF>uВmke+ڢN&;h̡UK- OkAqwz۽(l?$lm)5bFTd3ϭl3 zPpUuξ,սgDzMQ}ݼepZ!RNr yCztuKBGKtO@O:dK /ؔ7cS:ԖՊHd1$_4(T)k B5L& P`S :M eB[BS1yBV}pBYk$V"+Ų'0FHѩ,W7sܖ:#p渱)i˷-4CO6_yh.,Rrۏsf>r-{5_>|6)~;^M(BNV`i|2D>d" >&룸).ޯ3gɃe+-̥T ryM0rHΜYlnAUr0YiET V 
#R)(u,FϺwf;jko|i/E@18*C!֐tL h++]m$)B p%[ %XLjM {so`l|rw?{WȍOa{"9ާ K6,K${Y_ƒ52e؎bYŧŧleo1z5/ keI6eD'@dWJ6Vގ0]s#/cZ:%bGؾ燣e~Ii0)( b:yrɭzP~w6?6z½'RvUyRy ҋV2`$k;F+qn>8zf9SAq郂dѵ4k0dsIko˟VtftYqu?bͶ/E;d4?haz5eDLfoDmח[w#ZYoEM.F ?es:6exӅJsVZDnB4Q))f9WMf%snOYF朒ZÄtb)HȻ S;Yl: BI,IeE,FH(3a2XRZx֚io|Nevw˒ f-%u.[Ę@3@0Bb1NZuII`3YjoSڨIPYA Ҿf~3JYR2YXd)H%CtzIxbeeF5nY+W~ou"ǹgb\c@xcS$cM^HBm[j4UHC.xӲ_NNfJ ҷ}/`ghn  ^ }2xF۠FEd&m&M)\ lKD w>y/$W(q=+:էRI 0-NMfv V5|- ziA-d ъՈٗ&6FXɴk'ZtJHQ`( =bxt1]RRQ5HrhE$K3Z5ZA ng<mjj-ʘ>-NGWv.NV_6V@Xrsb?k4G汀)ޕƩ5G%n4xтڕ#mh[;G]~lzw?dCXQߵyy}3*n̟ª^?ꂤHǕ+n'ԽE=-֥tT@ G?mg*@)ZIٍe`^6Ä́4@!&̲" -s CNTV,az)$I*2Vdgek3Ud]Q;jw/^:L4뗣G2+d&*lE-98eWc {9rg4C ,[Ey;ĈѴq M6Onj#vo^ɿ)yEu&M2%?l|]ʩd; eѮqYrHdž]vUO׺G׺5PH V\{A@Y"wbɉ|AfIJRluZnE,B02L8OcD Œ1%`ޔå/E}(CRڞIzָ@+^;6˜f:^;wRJ uD.#vψ(|!Y#zC7hL U "GKBSX"B-mzdR3]E`_U5KY}")֮7v T=z^Ο"494J؄rtIѲ!{y\9kuQ8;l ē{ DZM }IX!r@]Y 6X9jZq:qjH1 \BϛvQS({O~|9ŪvŘ|meoG տ岟YSeR3Ӳr V LhC JI/Xs&Pqe.`ގa6@;G0kTIFi"+ViatK؄zrY+׋|i` }ysBtj]ӓ㽵Zlˮ ]mLkxԅF05Uru$B$0Zk W̎QȬ3cxiLF3Xh5hɲ uvX&:P.JT*Iq˼?Y;>O +[/^1Wd=$lJ<+9ek]F7J*ChAgw5CCO YgwBVK5xQgx܌7L)x9UϔEMAr<8S0%dv nf|✵C5ۼ|2wn ʞMǗ'q\&y:'1u/`= n94^&qH mDwn]jfx1Mj܎=.{Ŏ(xtc f|ӯcS " oW;MipOQ>hMq,~ޭG.%|Nj/Y? :[3y3y$|ݩ̂ zSۼOݔ .InkiIsQ8p~ܼya6 3/"uk\M֘T ڝXݼNHԹCS>9HWrvvY}ER%}UhZ==oßڤݔ뱠ryEqfˍV{wöϞ;uC+"Dh7ZshyxsG4lmӁy,;J8&ʔC^BBVZPmR/CԥJ|ޯ,!ɾ,J[e߀wDlNy* U.eLN1gȤq1+ =T&WI3e_%V{KnDqamU釞(#(e$?' DDCD$Ѯ>ekɠ%e͵W8-l)İQJK^11ltR`};;dP8G1'5F4 ɑ׎56NcºKh#" .'J<8-&2ĭ&.I]k/ÃT7Ad4Ѻ@K|[ [%6և>sbJxUs\n7|,r&u ;k 7C<yLF\Uy6;8{NƵ;T $ԕL٨M1A($N%Z,giD xиeY9jhb2&Ӹ36_{y.3,~\$94RoB2HVW)!hݗK]\ PD5SND򔕌е*(,R0A(*xKΒ!B;Sy'xN{TgGO!Ƞ)N8M+ʫ!#SK2-hyY-D)M JdT7t{3>hV&tB3V3jRI"A+LA>bž L>>8Kʽf듎9iZh6PBa8qa,h;"Azc3%d@~ko:ǞجބJpK0hR88@KYE O/[5 !dW=-+Yha~ vS޼Tz͋牗daC#-JGA(*!0Agx$OR5Bo>ɞi㐶K;{d1C Oi(!ϓNj鄔րA5V(\㴢G s. 
|Atbz^{G{ }9%MrMj-r et 0 %R-D cˢlcOA15jqUKn}7qrGm'Eb‘&grLdsʅVZ=r@L5QMp\`wspsRyNy6 scȊD*GkFi ~g}zETn+xMrIɩ.SgVU/"#_%}q?k@Lq7)$Ad%Avh*ъl8+q2,i }ycs@͘Kɶ%1sɶ |zqzrɔ,)~8Y&=XKK_gUD{G?-MZ\*]WYvj^6([9*${b$`=,6p)I&/fZpsQ8 \Tlz(Յ昋J:Hi 6R-c<`Q36mm>]y͋;?xbgNxv1KY_.''l૘e}Ţ"eAh(}[v_} !;'뀂VR(ț<A>\<ynrl'ΐO*)Vǒ"S*Y(s3C!}hwԤi1Z6llժfm/mwysCyϨ4v>ICp4TK!%"Ij&0(o],4PpkgMBEτvo^(R7odhU _*ɣEGVZ:w>"J 5D1O877!nKHĦ!3 M62D6Ƭ.$q6$*Z".{[~#hgf]a: -pwV=u1@Ժ)덄ؒ; u\[63j ΝFY:`,y5nl0'6wKwK ʨJr]U6eP%!8J0$L:UCtGNQOP>dzf s^K3EP JH *)K1Eўʜ%rEUm㇙&3 kԵ:{ȶU E^Ch![>~}z[bW':vi [XTR:K1^ ߿[t|'0YWqěH'Ӌ XÓVO?W)=n~?7+TO2ڟN}Q^m@s֋~dVhY(RbY g}0`njpP1-c[Gw.+n>vQgׯw7q Kkw\kG,ٮnPE.3?Nf {7ӏ?DZ7F,zΊt3z@Sm8>Snޠ%EA4AN u񹝛;4éN&]5P% ]kP{Iba ^B#ٍqnC Ƃ-:%H.z!Rz?^l8cKM}֦_ mvҖMwiꝡ(rC:ů&MAeЅU#I!JVsYŵ1IʍI7m]%hߞzUL:(c&*&mj,V;B|H>g)|$l!(J+dA&R5PWu^'U#Bvב`XϚ zoo@zA=9KPy4~b0ZQ EګX]*N³ԉABUrԦCT |y+4/-XCRl1XcVgW`24*׀f! jx8i8J`-}600ha-pI%(JǪSdDe0M njgBrL&$x|?~8TҌH2(c2$Ggp1N@hx]Fv6Ff"ac[Օ Rf{R} Įl9ot1)L.@R.pMֆ sQQza.Ż]uu\DR"MP6O v"d3zY57p?np^]:tBqla p8W"Hӧn0tH [E-r@S `yayσ=ZM\"A٤]Kq8m!K4}#ͱB+f'@XW^K4)WW+j/ώyQޏ&5Ws/4dThJlGRٶҧzP5?Ӑ_5G,Sgj:}`>k>k1;ĦnvKhuQ> ǚM=>"c~%q8eS|УӸR蛈n): vWUZ^2zџG8g>heئ9gw{gZ67MoźRcy0]s`Jͦj6Y@r0h6 8hg:]4PtvF{tC#k67JܼRfOz51{WSSYwJ]} A=iR/ !ͻ_WMjH# _D2]%'{S zs sPA솵:7չlBHK9[#1_Ս?>7SoF?XSC@ݛvܴX`#X-i[ ?D69ĉ)jw92ӾSU0JY.*Q0zse1EGODE7;N?w180R9u8m2'%U\+sm\25ʄ=ys${%Kf <+re )+I;𲠮̜<<>`+P/;C'7eu? 
nfQ0 |#4*=$5hM Y4W HqPQNi@ d<ز`a[.v~޹vL?<׮Ng&mpu&Lgxgָu(m_Ok ;HpSo93D]y|߽U]6gҵ+]/h 40/$:tkNS@so^vnfpw wiuIts㧓>6Cpg wںRx~Gʵ;֟><>uyy[YHUGV_a8n:[3|w{^ri'G&n8Lik+>DV*_%pf<9?OfFؕp~9v[ޜW% \wдa8hԿ= גW|~ ZF_v=ve}/]̫)Is΂۱qK'*urC00pWy>nݍ}ozSk9]8twA~&_R0TW HXӤeY,gڗO?ZatfZ5l 5^NgkcO/z/50J[ ӿ !R\dЩsԦ (xn^jJ=yM)t<UeiVi02'B—5UIU រFΕ6r_`r ].=G:緋sV k .2xQZ1SF,7Ji"+3_HsUd6wxz|D^;^^>G#,}ĥTQ&₸qAZeqAZЭOEs0IDr#W+x,rw]R^P W҈ FWX i5\!\\Y3Z`Z\d7jOaǷ!8U1p_"??=MfEuko}L)KN5=`r&O7lr1v% =qM~ۄ7rxLMq9.PAH@xm?'cG/N+ {^`;+wݺ@]qSj2t{c&_G) qx~(߾Ѓ "5źda⠒\%%֥6q8R"1'!|)kfɩ=^ ܶ'o,8oYʖ'R7-ɛQ-R JrԢ/H؈h qEVQuBJ{:BbBqm"+`uB\\!\!d\\qITS3T"WHU )%J*u\!j\!]+!9ʹxJJřH8%rFc+1[O qU4juBJǮQj` KWF#WHt )uo]\\!jyBJǮQ,%\.BNJ%lt W9n`&H4vJ݀AdDvc7=^ +}3\&hEKnN3J1 JrԢs-#+F#WL,r/=x\!T\\1XTr%pER ז)\!wq\!-#]@i{:BN#+*\Kd,r R+:+>\!&ZuJz:FRZ*#+(vcҪ[WHٱ{z҆pD9ˣ]!|)Uo]\lT+$ Ld0>apBJ[W(WrMbJ`F }2Fŗ)6?|w;"8ZդBI^uLPmw} }N.#A{5ɧW=a O}4l i9-pе^ qBǰkZك}V~N>|+hi? W"?/➤R /H3ω59eomtL ,n-Mkh-;HgjUF[Y]qC9K6Ne 2UWNB+D=:)[@l$M<oN~k4GƏY4{00x}E8y}2#󫓼FE-k:oW%aÁy_Ol6I/'Jɇ~J>(Ww׳־6k/..8y@'AwEpL'?-XAjGLW{U 7#Gggy*gw̚yL7=<[zX۔͐Qc_~F;{n%w_[}OSl;t;W8\Q~ .G*^wJu2;Yd!y U:ѤR:t)`tQ P%o$k>b:qxI+ѭ8=':N O4>vukBuwgg}|ӥ~ӫO>7p:uG>p/ڟH~GЦi-]u^O6IS KltGzѬ:n?Ęw5azt(9H0Ĺl`tOVTѨ޾Ƿ2"h|t tU:'Yг5TG$źMsӍWG) <41@kYDTK5$뒴7^ A$hP>fZYLBI Mź;f& FQkCbvJPs3Uo6X  q!8'ן9mO k /ϓ/:56kև_Φ).Z]bzIW?yJ7xW^Ld1>.~M$F4CZSWQdž~f2&3lXc+PLfLr>ŵuh n]jlGKv_q[󯭻yvtzfG*(m}ܲBZvƻ;_z٣畖a:loyuÜyf`}~뎎EA_a;me[byEs9e}cwmmKO[Gcm:f l t~U?i{%O[PaP嶞>y87叵yV`*:MSz:q7>U{?Z&৕G-6ESQSXTvAii `eԒW>9c›,GMPqj.di9_c|Vޞu+frOIi$AZr 2I EP^ZŒ(0B#J{,ml# Z&NRu_Vy#uNz):J?=Bi>'>hv3|/%;?"}.hAiTAB9aWˑ\-괃EPeeuAN a`+ʐ9c5ea,& ț G2Yɲ%4)(Cћk`@l$1696hUqSadqʣ(yXH}E,<)ry`dÍۢ;) -lne?&0sk{ \9سa5E>ot|%iKKBN[.;AL,1zA͎b8*ɓ't|M.M`uuez8@DV$QMy'r P8dbR46D5$&0B!P%NZꔂFX`f 4cJ6ĜZ1Hg,!@"M)ޢX - ׇ]PbF8X9KZt5 AZE,g r,&)b(֫S=<{ױXcʁ i'5$S 2E+(M&'Ӽ+jVTz?iS8<xxl|xŻ~~7yw{ԢT5S5#mQCjv&`Qk"\R,m}. 
$CaM4w$ξe VEcaj͆GQ3 IơXcXLʌi~q23\욬]|}_K.RRR@'1iæ)X}Rؒ]`S;論e^͏{1KNI*YaSYV'!¶HH[@[ǹnNq^11OIǡvP{`$^RWA#C(k(0,ƀU`RcF-EM"&o@4[Ѩ1dLJzLr1Y b=fٍQ_Zf<:Dl&"mq߀"nɒ 8(\vhHN2#P U" :I땫<4eEX J+:PiY*Aɒ#%-ShE g7"~$~Չqq1g@ %@f8$¤jb.bţa38ec<G!dgLH/EtX2Z0BV YSJ9~\~`^Mbn{Qzڦoŏ`@bUC-,Et%%\PHܰ^u-@򈏃UC: _1gY5!0 C:()1 EE`3XLzlyW08eG^9% +0kathʨd".Al8{r`==jz͘nUݭnGWln|w2aPngyMQZ6ru/= yZ28M7OLo.=g@o y:ckdV7vDIB*bZ%|6r-8 Ot=U{MWF7$Dz]0B2"esPj2GJVl[)!֟'>tv_т8T2kP"E-Q=@nX3و/F_vY=]['oQʬO)e ^P/JásL00A9^@? Mc9i/:(f}DW7Yg#Z=io9O{l|4,vL0F03itԖ%䣓}*QYTeؖXG|&XJ(˨S7ѝC$i>\@𳐞a"t0JeLOuR49&Ǔo9n?EK~ Prs0g#ߊ_rnǪħſi2,_o܉_.t-|$VELi]bpuzKOqqӝD6.K.K-ZAd1s@QAD#Bzw}Ÿ.m}j*G3_;.;(Hjkӳ]|o1l?t v[edJ+ Fknm2ۼmJ2e[Xa`QXDd`u LKFVў2ELE;H_U 6 }$qhqUyNbp= Bh|7*jT4"UU%_G TU(O$U9Yjn o3c6ڦ5@Cͼ );z—|.Ͻs}_zs3 u.b\HGl~ATJCrYrLP7qIRZBΐkuN+QdCYowt" J%"Y}UFK1띊oK 7u2t^NT 7-10pz\bP-~,nxbpy10Vqoˉz;(a}%WI"HSIX7%,<KX<{#hy}"7YfZA *mZ9TTʹ1`FBt,zŐS퀸q.ː$ yb4M !=#:j'^TO]k]͗+"Ja6qO˄BsqdP6&uanP>zTL0=Y*Q,0A+dwry <\T EK#-"W@Gkkq9 *{3Q;G-K^ )JP Thy,9cy h_A tLK"?eWOXt`< *bL{RO"JnР 91i)Z1HJ4mW@o)K}){_o3ٮFkk%oZf ؿE?T9\uQR!B~]a$.N~p+ra+2F#o&0pxà/eQ[5E"!R\(*:d 8sVػ$| 4~uD.ڊ;w(j+c ֛<ߖ/n (+pźΊ&QzCQԓR8 jmQ\7Ѵfg_>пTcEv-8|S`ӓ6AEhx{=rVx6\Nk e!Z(R)pR2ZRV=.t0u&r|EpLݞٞ-=/]6y)#Hm4*5suQ:ZSԒRD;4TB%EpØfQ)Z9vvs9#ѵ$P Rlv̀|2e{wy7[^2Wd}8KaRe8hz\&ͤ@r:9c S.!si wX~۔{;N$t[&n:Ę,"(:)8 GFqxKR9LB#S$a C=K1j.rһc: 鬢﯀f9 jwSV Jb Q0 ՁD|1Q1qMɣ;+c_}; C:-:AoYz;uaIL6iU~@6kGϤ!iGa`Gňs 嶿(1'B/հ %-2JI( 'ÄtԞX^_ZV<2kq3dWg?k\wTVM4loo (SZkӋ|._*?bQmE0T "ዊqT\,XJNbiޏQa6} gzhm +[%iݰf5s'\VAmѥ:u(UgR$c=Jr(?-o'pR1Ig@"Z^:ruo4 *ߤT=Lri.D$DW uX[YydJZtXvVtΜq._>h.˧+]-<ؗ@$xɩ2B +bmdehl:E=vBJ|j*?lKJ+Svblo{yEo@gy1U3Z*ZÎ 4B@*s)1ٕ&9Ű .g/]eh:vvgWo]f%î2LvUR=6"2yIUXî\J॰ -?z*CgWo]QfK z.oŪA岉=+~ZU~zic%pDI~Fx$s_b2!N,)-*PN8hA*On }s=V %S; n75.( Ԏj sWGà%s '8G3[̒Aœq*j H^HS u}aatIFw6LrBq+~)ey_L/WWW2c/a_uQ3;FtȌ8=s~2E2SS: d tNȤ"O'˝$Rr])ɍC*sTM3!ɲ;!`,,ے{Op y% h c= Jvd'KGq|Iˣ*:ciӄA/a^1r];qތE/3֨O?ockUJY*@ruA\ a9O#FR,=-0AT䯃EWog;83i7k/n^!H+t־\~n=*g:gE"*NjSFP06H=ZYQ@@FY{lK ' Ck>ױMyk^r$K*tyGC H-LBTVUʼnqEIS~M55h'|0E~2͡o͏EE(?g XbhRi[hLn 
R1Ș*p[mvr&A! *;$RKGyQ4@ʩ. f ~oId BPL [ 5!QY!2@6QڡBpH 7Yf0T5\6€SR>Fh 1`N #LPA$l),R).ݦ(6`nX tiYKkfWRʨc^1'K̆fP*%!q!*J+"}Z|9Hpc t4X%YwKV0* N'R[@ w΋uX?`QYooՏ﮾ C/Wzo]뷏Wu~b' U2O5L8*F/Z3q/V^W wh|xR>L WI/fw[slzG;Y޵m,ЧR#q&h~9.r7։,)68%2 j=(<~;3tep7ϸr#l1Ng0\eY30O(rJnA\JO@,8mNΝk Ͻ5s;^oDZQ?q$p>6莛Rcȹ.)f3OeZ t\bg,lL8ְ FXF]# ČnUO/_,>YXP~~` ~~db!Ű~_,Ա:O/dbi<ԭ_,1=YXt~$dbin[X2t~ddbŲ'r,p;%zvc0' ~"Qq9R#'O/bu~>aX[X!~~dbOwX/V_j‚[5VNsw"T]k>r#fijҚ5 pY$=4{aAes0'99&W (?[jӨVUòS~Xֈo_lшdj:+(P"Rk`Ú3 -{->ù9bX۪,WO:yfy%)Ϥm*RO ,s)Z<0^Qg#ٙ8 μpd-ksP&3"HX |eMXDRmc An,Cadze2lPӗ݈י߭3$w֨0f'sTaJ鱠o*% [>XrNEm_)n.^S}j73](iEV(|,ގsxjSg+ArPBإ᳿yaCeއӏ2xμOZ ~cu=-j+ZnΓ&r ׍tg O K[)TVTS%kG.oEu&T F};R`5<v>A Tٿ ܝVED'uZ wӆ ՝RCVTc(;!7HwݩfB q3j[~S-X)%wI5Gс]sbvrpK[;܆":KΒ_g¾`vt !P͹?0zU?'dAaB[A$X..7,N1_`+A`3x5Q.*kGCN\EtM[<-MT'vmЭx3ȴITK_#АW(u{ 8~˃&;6ݾT- !!'Q:!1LbyDubݺۿ2eC:߆ֆFwݤbn[.嘩F19V-Acu|9fFs}MKA0 #>[Etcs̭Zfr̜asٮO2̅F}1j ${s'c1zmcsmX,}scn=Є9>ܪ%]HeY֟qs Ǘc>[嘥19V-A#cV>۴E819V-1kD>>ܪ%hQ9fMO"ǬǗcB>瘿d N!\jQO@oVѫSl6OǧM%EPAge:{?MM@Ֆn1]:;nPtT20,,eJ橆0JpL+E@!YEt9|snYotp"w6Dz8;sGDb3EPP "r„,i pBVEPFAY"+ͤ.ހeaNEX31vS-E6a0c!y&aAl 1IbYg*ψ@ B6ƨd0"Qz3gdHяW @Y&@31.8YXMD9BAB!Vu"t$ X(Hi2J\a)ΰʫL``&R\pO`@C 1ˉ&!«-J/UN>8<5p/Y_d %Va23CڧR`̅U6)Fq6Z=,FM0Z膓CB9.5928F0,XQ@`8/3ksKXFy\lEXHP1'R,|,fºK- aML7nT{odq}33 eΒM, &*(]$SW2ؿLTs,؝QO][&ύ櫉U('x:o~1l?ôN`4(͖5C[,U}5B,kx&Bs֕ ٷM&(_w95!:Wvvs3LZx6Uڊ~r&Qb{ezIUptAtn~57|yldK&ܘEr@IQ"͌֩ey.׆{޻ wg?9mPh(<5ֺ"MH,h˛cpNuL_lL/Lw=>h}`~`{[~s&U3\+q$!H!ze @5΃|=O+{igIk'/!aNm=l!,T+-5Vo~ym}‹Ltm?y2{S,jrQNV{IyTiӷ_ӏ <-@oo%V-dӫ&y()skAJQWf٨Y&Umx299N+&"NL<}Lg_J*[UfxH2&#GjEy;C0/REsc Ra oGYG|c0H J^H؛K^Q U&Q2KXrdv<ûf ]=h'О4S_Yq9EY_Ĝ=b.N}9hڷ9&W"q d&=@Y9qͅ;g9#U{ D \2۱#qj0_ 3rOƙ{:Hʿ9 lD?N I ~, .D}!hp=-j+ zN޽Y^]Iy&]nSU2}j5);`~;8mr Ѱ0(tLoE@Cbjj1nsJ,p\ f9`8b0RaCDo'54 o)փ^JK\}؊ rE(9!¹*BUxt3^kpӊAin ֘۷_~dj DF5U5eЩ]ҫctjUdZW˳n \ΟF@pg`ѵ3uP|l&ΓWL]޾._-|r f&c{nًQ^d߆# sS!HPөLAhf0&Z OyMonJ >D! P &?5fzHd:` Wn6 _pu|+ߌ5MBQ-!9f^ ׎>yy)^ʣUm4W! 
f!K&s8H[-^9-G؁{4U8k4Ά`RVb7<-,@mZprT7b)Zb[PRԗ}ߜ: W=Kr KOs yX_\3TS uM\&aν9"H)' K=#T.㥦:*c"O܃S?l89 Ɵ-MT͏l;ί׵wg,7r2X)9 `YI%'R2b2ñ rDCh[>;CQ,Ht|\Whm*NCӇKLU[.[|_-ѾӛgnͶ}[ {彳def1:7 Y`a 6vTbtE5䫹nMd׆Ksf?&?MCF&/'7i';.n8LVpe8i :,k#]u%l }rq]ȇ{G1,Д㯮m'UǏ9JJ6 ]z?'y+3NQ92coɈ[\bgv9/y(*C@?=ký̀<6u,I6Y$bzݰZbqeol_Qj^ kxrV<woώSmWcSkEMbT{ǰz VZRd;Z+HEa NQn}~yKd*Yc5tFPtaV~|V|XUϺ12?f!Lo}?$rm[9!}Tq H(7 vQmD7gN2G77on*yѲ=̋]i$W"Ϫca)h扗1CeVỶ!gA8RT{m' ?F2jC=DK~]I>1ǀ]KC{s4 ;{Ģ/|̸joŠmcp=aOQ:C=ï^_:qz8v ѵ;s'uͅxtMF+3^' wl i< x0La3[3A}rA>(x /(ObCK g R(a3pRJ縛%ɟףuBel5zaZfv<lZPn~o<4/J&Ri~3k7'$ 5Ӻx$h^"Tsx ^9>h+|nvB)s?]ɯv/|(M4ꥈvp&g9eGn e9'E@5\ !xN[-(J 5>DHBKRf,uQ|MKsMr5N#ș-iGM^ ˭4_ Z 0_53ru'Z.aq' Zև!^cZIԸ;K5jP%{޺rMMg UouNjtǺ_~54"/>|ǖ=fD)6* 3N_lPiM],h0|uaNnV=|L|>ϹC#u;yi,WAC".;Qp̉ڼ\"kyΓ8X$BAz6sl΂r 2``#dO)A.gvR0b[2oVAQ2:^ILǠ ebvwd)$` F`6H0u֕8Qa ^$Rje S_(+`c;Bro<eVX)GU Arr<% R@]6c eKNc 񾲼܊B X-YιG3QJ)9m7lRh9{8ITL 'bqHFFDOkt}"Y"kdjB ~+_mX-fnPha3OZEӞ}41a/:{\^v ;a*!:"߂iEAu`?7Z/3TL'`B_XI%϶hhiN5jL+øު&hK,N@7@2Bi|VxG Ṉob30e$3}as#Qe%^"2 $" RRL;_*K. q2D건j>OC>;pY2 Y~e8=]HzFYJ)g+>V?(2@ Xu媼NH&t:aFKt ͻ_2-v;uq=xLkP4ݱ$4;AwKU+s)NԄB&_ mbI.3MB Tѧ+Xvt} RLB\w ^"JPOAeJO 2ssqG -e$Rhc)_WE^W^ed@&vs[ #t7ʧ3$)tb[IByINo_; scGbig%1'kip_%z)pB UY`)#ktt?},8KXmWwz/\I>g 6C߅%N˲>ѲL3cM?%C#p),˷o`MFej1"#棃` +K½>@)p߶L i.2YW^)kmϒ63=e+zgr*\ut%C˦ @* lϴagpƜ}۝|}⚴ Aj:c#3.@rksn;?;yuKvA8G+\2'Oomxz 2աWC=@ K=' ߔvܿq]mג6 徒v.pˠ]ktsx+Qo5p[JՖ+!_,&$7r,4hgs)t-(yfdZ Ϩ, z\VxeNhR>h:^0m|tqs}Di;W7k$$J{̕(aWGYQ>j5kak;>G)Z׭[`F L( yYq͸ph(beCA)«{ 1"w4ήɟkoqw Nt=P3Q(gCfU9Z݄ʤV7o<Ŕ-%;A,k!zXQ] 6E1E Em ] <-_. b;Z[LZ1Ju;u`mKb>mG 2nT  B"t5]D t Dh 8&m6%bC1ޥ}TfOc4fOc_T(<0Hy#VhF} 8 R:' @/NRP!Z/c|c:JU~EF*ɐ&' GFR':QuKUukL=24#Ӫ=RU]!(reFO$xN",h9" a](KV]NHum:儬A#4/ӨL2/U4HG-I@{%Tx!PAPϼsK\+n="NԈN~1A3Ή Rn`I?՜ҭJ1*4ӨRL*EZ|(LOr.p$ZƙZBZkvBQX%Q=8|JN6b}qmOiTt6zSnl$ yV݅`{9xSfqDZ+/XB) e"e !՛`S(i>߅dJMH6#7dJu]0H_FSw@j$nPIdPgZƏ}yYjv8h~ TB~y{gt/ QѸ2Hh={²E(6nލTgz4F 2~|Qo{7Ƨ4+.V3ϗf˛oxlm;zrp\C[0)HJ `Nȇ#(c+R6@A;΄"E%r}z5PRVXV ? 
iq @)2?ߍ@k#Z m]AZ?S~"{T T, '\sgE=* N @cp T_R&fEKWepxYE %br TQF -q\KԲ59zzɅ'9"ܜ( ~rm P)HlU MQM.tQu\QTp6@4HA[oTK#Gzڡ@ОP}Cأ h# $U+d Epj"E@x-hMkSy.4JoO:81FM{bS؊cS'cV: svZRZp@<p:/3)n ccfЮkBq(YwVA6!h´i0F9x AF hԆJcǹd)PL2Q:jW[} \CȐ ' Gʪ!o]̕,*c3 e+^9X4)OAf?O 5J-YÓ+ZUzNS1@#>sQ Ĉs2L) J+]|(jFc&tA2%m#GRϰx9&prӊ(Ek[VYǻW7z]>T32,}xo'vx%ognb6 , 5yG-S!GGeɵYXG8ng|뫰rWWH+X5nN\OY+4&~pV* C|ӒJicuf+ &.y˖sJD,rWܼ:p86~:P)׳%,qFջꊢ"5zg|r7[=#vXqOlyt"ӻ&@ 3y/%~,n9Q{7ٗPS?j6L2a2Bg,SrϘy؟q`fjB_O8/.AP-q'jmV+CqQi< bmaU [-f= v $E_Qgzȑ_؋ $;d.; KlL`!ՒݺYl%' Xn_E$/RG4aqXvF Ae_=zԲ xy4Ǭ 9=Gg@N=&Q̖"D̄ӽabюxb 8ο[GhCX^)U׻22[s(?xXW.0 ؎@-~%hzbݫ<(jn([HBr{h-'@6[v2D6 О1a-aʻ‡\јCxLPգלCB?7W/BP V…-qB{bBsג4%;JJ`1'񹁐G7DžH)"`H`)ḧaxMr.LYoO.)+Mv5*OxIx,yC3{-̿|lNnz"Rwwŕu^@ƊS??|6Nj-"폹Erf:@iSnoG`Hɾ8/%}9Pbʄr6;/ўH _W,og&'I{qQUo|.6  hr.޹[M R1=_nWZ1aF:^]f5H-^fQ|{J@`$FPœVd5q0 `0*Uu(i(D;=CRjIAaMBR SR'3JҩJr_EUm^b+7sN5Si \ZB[ a"rJ} HBzape swjօ@p\s~l. C٘4=_[z l6Fh GqSglyxk#)!;0XKChc헐#a<_䦯" ^=sa *Rx3q(tUJTiK w{:0ܬaT} t'uwX>O_&)g; 09 \DV8je^ ?7HqJPTiBkhKe>HaD<6gaA-uˆ1|>+FFFV-.&pFX:"k*ȢiܗmQ-_:t!WE_ 3!9ߺƵluŭ)U /U )T8Q P+"Dsa/ ,Vܯ_viDKݎ vǹӔa˴CQTPnX30};'٘oMD⺏K{ l֞}/PA6|*+gWB03Ȕu[k|EhuKfǠ9͠p>i!;?c% +5d?:=}3| I'~9Sjic"Xb_V"no7znA9]rWcddR̔U\ΕVyITvf *nǣ+p 'Q$ȭyg\"B#m-6oW\xJ4z]hNawJ.K7߸Bѯ Ήv[ElX:څmV]l Ad|\N[(AUYt# ρ#Z E@hċa ˸5hTL:bXLG6gzH2_]T#f+Y gA Hs`ZzU lhJ #, Ĩpc Y I o\s+CO‘\Jϐ=Uz)rn c2Yt8:ho0-"4TJZjŊa 1znJ-p%xZN$@x^Sd_ޛ-5tjzgʻ qj ޿&Ḡ~lC('8A,\qe7吊wۉ-> l~6)#VA#m]7GܸԳާ5zn^QeYg! 
Mx O)"ʢ5.%JbQTT^ɞmƣ u}_̀ Ep1'́ڧπuh'g*zQ*XJ_$=/ !A롣 u ZӝpM%hh_WxEW_v[pa!Q:E5Nk# ծWءzˡFkݏNB#ƙ!]Ɉۻ)Quq7_qbݻ)ŷDLafe 鸅|כL]_>؋\M>jx|uL"|fEbE<ߡgḑLjQKцj cC[x_6dWh+V`awX|Χ:*^iiA˧{ 3B5|e~+ oV !6[:GFt5E__h:/~1;IJ2ͫ UogU͉BBx9a 0M0p{ݜ fsd _!ꪤ>Mb(bL)RN_uPN%gH[]e7"h@ij}Xuhi4L˝φAO#[zZ<]<řFqÜݭw_׫LՔ>S0N8Sz@_oWPAo 0䪠#wxQز¿pdMqî Fwgdcyd &dU>0?vfLxE_(}$ ө^%8 /.(ȓ"/7rj3]q=|\F%Av'M K$U>e$yqu%Iev2LEۑejCu\塯V r=géH#9C⹓6}mLyc39hB_6RG5TOaFSSy |i6m%-&wD]M%qjauZ`IH(` -`Zպ.th*EhB M/?Jyx{L/NeoɹM+*#s?-nfhƈ 7 ~h<%Vx*ە*-z0C' Ra<=M{;r%dJg F#ܓ,K*tI%ST0GxoLY>ݹwgEUx'=0%1$0M"/Y 0 ۰zV K"X13cZb!|+|E+)E Z&/ Ks_~$$Ssvc¬ j=Y _|gMJ(Rʰ/0[3hsqZq QPsEA7Q0N; FTxC%52LiTJeofH<}Rk,stݨSRXs#167(NS`́588,O ((B2eڅ8#{ v b}?L pdD+NIEmpP̃[,q j;-};=82$@~Qpר AzƥPIfGR 3y%1ҁf"v-X/fr7Q3W gy|b.)pi1b2rϤ'YvI:"R=.{H$ M{b@$Od8rfsb %Le_UpCٜHY j/Me6 _p00nc bCDpHB l&$a2yuQ@h1s;sDcře4*9q6E kLtf酓0'=IĔ$ haLs6IL1kzBfWh%4EjJY ci2 * Lb8XLW F^K!db@1'{ɑx~:M!PM|.S2ݒWkaoH8LJC%X\%K~#,v ߅*alT&Of:!8j]I%i%+GߞęG5eVuw[eql3Ħh#`:Ŗd0ԥ=gL1% ɶmx-Yv1,]Upuc#HN"M!n+73(uagU38(}e+._?{WGYlV a,7;/xZjZ>7U%e]SYaÕUL`\ F_HI÷P<7ӗͮ8àjJD+eLt2x5' bj;YM Vm eQݑ ($KbEו0V6hS1ƬKIuA%SA/a;سHw?E)pUBT :ZfiKniᶣޓQ{lu}HGLJpk+crsypaUz7(U֗)5X rW=rQ_ӰH|myDZH20 xY_v vGxycCY< {,F [~4GVhw~+ר]X"s{fQ7;|9i_b_hx_ ١O9ʅb/ +m_҄]UV=X6fQՍrKbaD' Ea+/i|=`20|%-^W 0vy˗@ RxV= 2Q[MJJSL|n311Fjg t{Ғ/uHUJ{.l, -Jچ q'! 
"ac@y|24!cJ@Z ƌ1Rice\AL+txB}lz wL7G{Ae}C۟nP ,dg';>Y'.nP: f8*4"1-6(HB#HPjO4Г|if&~RKMQNɇԉͮ]V뷡Qھa5=r1=?_z5:FK{,\Q*v-YVKgY-etTK}5?dh| {lQփ+ u@H3S>0'5nz7:Gl3hPn3:/GfIԷ쪓׌6hf~|6n  mz*ᖰ *PX.}YKk娜$8"- #v$PNsLjtLdFh$];!({/٨.7RNд nz0B!e|YVgM4H|&AnKa)*oT8MBs1X{liEL/or|CH ;MdJtuItX DzȃqsIk*.IA:DcV *bWe3Nb4An("!e&EԪP4v: ;bF1bqgvL "v3L4.@g'ص,w4;$}j%jr+ $AM~1ʷ0.c.6p_B*⤮n=]?kTG6g./Q|{OiU9!MK{?yonxp}475Y= QA3~:S |$ ƕ1%Mgqaf20/| 3(b5P3 cjE0P00ĈFG3su.y2Ěvx'1DP$+QFƙbt`y3d2'Yz⢴.~wⷤ1ۋ%+t0ʌ_TFɬR:#e1:.E@RШҌp.#=U,1v]& M^qn b٤؄zB2h6]\Ձ i@#ڢu(-q,pLr*j,MV*$HCl McGle!Ï}1r4v^ڋ,2$qyrHU@ Ϩ60bԱЭ ^UHWg<.8ŠAwW_Ye`FPTF+V86 *BPCI>”6@P ɒ1ZoHBRD 24U)ZĊymiJBca&1&ܡiA&CA9Z0mPJmBmE zLQNJBim P%:,^Xo!0Zɞ opRp0LKO0edR# !y@Fz:pK@yC@6 -rW@48\><SRT Ij9#f!f\E>rTrkF2)(LVjm!Ib7$; (9l7SmHfa97`M53l5 h PqFJ VEنjg(d)(ׂ0A(1Q36Cp، l;ef!ͩ74ݧ8>~C|F2ZHbhP4CUiNpc  zߢPJ O?R_RO滘.Тwa.fx,o?ɇղtwsܝ/Wwi>\NG|˹0m Ivw'k"DC[+R*@B=$ Hve3=eHXMwحա =v1H5kEmZoyHes";*f?ytVRDQ z)%˾Kpkkz%U fw?j#_RqXr,Kzɽ)a@:25 Cq-!> 4x" b{M;TvP /%tH }n"$POqo')uX` J2_0?{:{|>rz[CLxwb){4 Q&\VHMv7v)𬚚@\ЯI0k (g 4݌N rFcĪD5U! 120Ծ̃ j@՘3[Dj b H2±@q,Ph}T04sLb ?l Ӭ:clCHQ&ˬRǧu)apᄊX64l iS0C?^_h,L> Pb\yDUߐ$0| b^,͍% '/כ}0ƻ ?F?><|^K__񽓓/>^\>ܲ8~D2DpPF,~soydAgCooEQ闢x)d|QBc,`w6~{;=Ǐmht^^ AѬ38- 'QC};SKWrg!P*Z\\S~{uZcw cL8T/M5tZ o6~Z5..I )i]]_\=2~n1VcqNoXp=?ī^濦('KIh]LAFcR7J?#w5Al}F̥ϬD5/L7{|epNi|֌vnu"$BuqۼithӺ#ILpu׬=O+6Mn˽Lr\*H+aX#6~VT-_ a#]F8$%#t.93"nyHn3ik6;KQH7J5)]2k0zIUeni #eya(' Z9p/zRչ{cY>)%: A L&WY֌x>4pR[kokEj$5ޔ8`6HDBd.&%lnF&9r-vYÄz]oojB\zPbJ eS$6ő01I 1N#se3^eQSS"2%P¥Ijpܘx)$h팳ZZ'Z@^q@"+xR\hrX0ާ$ۏ?F6S6ThzbWI5{PBjV(KRԈJ*TBH4Yc}Ն-ymdߘ\Yo^D"pVXp Ԅّ#8n^f49ʸHsXNRT} ٻFn,W I^ S a,^lm+I'AdT,JN:i"ys!%5;]"%ab?6f 5WT_Y9IX#_ž iдfˢ YO |:\Op]A  ļg94F<ߊxFSKfdAi=W ] O&wUQi/z-ϹV EBbu]@s":kL*4NXPvܥOMnQQih=gU{#Hei֢,x#S#6EjKL?uU e4] ETHEDĥ8+L`C#m1ヲc|**E(PqXKwۜMӊ1y#" 'wsSvfB62 (,!c4(|"P!r猔Z m  " i7=SRujk-OjP*3~^p! 
UvRl!@muDRk[O"S4ڀ#x2mן) PŒMLID7) E7^VJ=;έ?Q 1Fk^O!H@A|331 TeP a}+rt͓m38jmZK%r$ 4$ϕJtMr)n3JjJ.\7,',\)1 85iw]$L1L}5V3N֙C #SMg]j*6Cud])9W(] ܔ0 NVD L 3#KK VRx zigj.D bgtY󖃍wHImb&fV[uq羢ZAiNz /:BmB 5U)-?B.?٨ LW]("aM,*)Ab%o1wtŹZwIqtUxnf}ZJ}+|,K1f*zꂼڍ~2*㝏\Z; $@83Q hNIq"ٹ-N xq">lIB#_> ڎڮ;ƼLzjYŲG~QY} rSˇlEZڠEvugmD~m֪JK&~v* UX #l~{7/4w}6wIKSg$a}|%Ӱz]0B}=Jsv;UM& ;tJtt;uxASuSϔn@gNR-u&jzҭ+ ru6^5o=8S!9Dw@T*o_@/kt!yH7X$I#Ts̸|&V:`'_6h'|"%0 7grg,(y3kޗ>|=7Ǯ=|{p5߲ O`o]% 边s}p.nƧ8Dr44kl~\lN`_;23O!\FIVB0O<:JVɸơ'h W֢yOt&Q^Ig.@~;7Z@|x nWs>Ǐk1J2K_?\:pJΎm7CVL;+Jc=G]8jy0{vt8?r/tt5t-bK3S 9Sv_:KtM{b q*_GӻIJMS^կ#?ʂʔn1i4>TOw_lq{kӋt/oRGeI_jI0NIvf ~A8oN5^z i՞m#@Kf7`w@b/Ԃ(O=vS^MfVZPBIڗYd1^͔ ɢ׼(Ԫ<|J&uiu,!A 6=& ԋ/-HKήߝj'7|1ySbYf6=Cƌ2|/.E'䷘Fu_]j2rݗ`2+ @<_> ,\YG },|h4,\eg5"g]HLg,])}o<ڍGԳQƛxCaR8C2'plD(O@/9Fy0t*ZY(࠘/h V ([0˲4v~dRh*j(IR3%A1F~5%$B x@2u1j2j9!vGQ-TFF9b܈"'% J#|iu#JэxouuZR"쎵uSu *0㪧ɾV0'o$c_,YJ@vR%s #y*n8fzz0-Uv㴉\)SJkGEMG?i3 iJGo!Bd;FC ŚeRSJ`P 7` !Z볖 P=]:~w&ZwM.g-#T|#\״@rM9\Dp{g-r#2ƞ2\ZpT*ƪO8fi ߂8FT0li\Wsgy[Ö$5p鋽ldܷڎ\2%%۱V]W7fYRctyL' 9kQ# *uUQm#r!9D;`k`{ ?@6{ʳ?(-R%rH)>Hv:ZdHF% 5Ne'^8rЅZBǜc/AS9gثDB Z$nA/IF)OIqu}T;ߡT_=#Nqf?ߧҝTX uh7pUr]TBJ3!zQǕA@M}Bie1{jXpAOWj,tׁ.K4^Y@+-.fסyA000Ѽz#rq ^gQqqYXT XΗ!evP!x\#N'O D˘r#`O"$E"i5350$ (ʒ8  r"qR2eE+tDj#\ YiNp"r;ڑk `ԙ0= 70W$7V .b5Tz oQiRh(ZBP9kk.6rܣg?#bEM@sl Əwɮƿ,FmtٕYo!a @jM:J:͠* >_|V+F4|- 53JkBmY7Ӗ7ӹ|=T+Zbɑ>k9%9(aRtC+00S<_ D#{S %'D?ȴ7KNeOqH-Y`֒d=AS5kEC\ [s"f",\{0[՜ccN3P7uA$-:O qZd/9q/} $F繏<%hR KTki~+oe٪EN;S.4䔼HSNHL Z]2eeDPY#RxJ *ju #Uf2 VȈK~6~wPXg4wg`of׿eepfvYF*LN 9e|Ka[6,<#,BhkY=IЍELp~]#GWn\Lo>|K[1_x囏|3xkknVE嗽eaw6/g+d锋$@cYH=3߷AI-xL[&5ݍY1k5[f8xgv|9DP`\.U7_f^> ^>`ҿ|8spԸd*+&&F5{u䭊:bŀ-o+An3*)VLe "rAۗo۶>y_q4_D2)³ZH,XOZۮf @!@9EӲ0*[[ ;*2 ]VuIBEI0cbASqy`nnܙg0}ڃBrjȐ%-z\Q[Nr^*"g̾ރ3ɺs@'GJǝiY|< X4gA=? A8IYp1NP(2k "ꡫd ;Kh=&#7zb@;q~xҦ``|s޼GD9Z~yP+ óG)_R|uh|`> B4f"ʒUR*ךou o.MCVl.?G$Sif{XaߞX?u=jTIْUx'1ykX M挃I)|q K4XJ`1.xt3V8? 
$Wc[F2^EZbm@MFۀ+w^W 0^ *uICT(WyO|*%–B(QF)dK6 / j1s"et߳ғ &"P=ڼ@Qyc޷`5gBX ݅ųW S` Ju&I-` *+ΌP b"T€g)!scT`:VhO&H;nl`ong=i˘Z6G)ܲl7OVy‘uWHVɆi_sCX0#}=E6k9a[d-nveO,ǾT˱^5u}^_(><ZB҄{+mὋO'Gq2zQOlm ;\r7~G*X j7d%}uR 3&s `;pFn 1;qRrM^؜c#$rW(lU #%sKteAO(wZE]/?nZ7<ыI~j񍖔N%%{<|;jʻ"¬㪃j"o^)esjD=[IEpΫ&H u;U+ײvmke`]?wKÅz{ШPZ}Q맢US̗]l]̥~z#G5MX}{n9,zI9Ԁ$ jN~xå=v 偏脾v;E\ Ixj>$ jt8UvH }GvL/BcDifڭ ELiB4y_Zj='8}I`2}veC>xqO/G՗.ut|&]9bʵǺ*n [)Q8X)' J\5%[a{mX 0Q.v9Gg>NRǽ5S) "Р"]ي/MN9G/OWK?I0#]ElA-No>B_izJ +56jkSdœ9Art\ۙ92a]&Lz}xW=IIXs;syn_FO-\p{OLzqlA“>#+hk-4A'm$^ڗ2)L,NKZdL1D(}w*hgH\Wyni6lRB'x{|(4XGnu{ahI:fg. { +pȽ~X@+\7:ODQtJE]g1m?8Ɠi6n5JqO?X 1S  dZ%0邚E?Y:9ۗWP\ᔁ.'Fv PԎ~ #)"SM-c_9 {U<2cj#dgOQ.X!.;Zpގw,H vCV-%;'kSBՌl^QDz\H1Ll4;h+Ya*J[d^J*Tp JVDrJ x#˞큹"Ȯ(b-˗->+ hFVi|7W25 Q.S><^53^t9SWN2GC0azvY/Ƭȗ3YЄ֏D[H8R}B˥F$@HF Y:JaBPB #}ҲeAhDƠ)ֽ\̞sGrr4k?, BclKA>=ݘО転 A z-ro/0;I`}L"dXd l 2k`jOHIId?juc8S 4N H7-T>k,,|MOx-#AeۨLز1;= ㆲ6 DJt`a|km}Jj__=b0ܜ5P2y0RPEDWَwb#㳀S:J4;غO 3q4~~0.0ts6q~2W}﫦|7#.DQʂ]FEeLX Xcng‚=8EAC_Y5Hs `yzz5xE"Fqa:~*Xl9]![J$6VUG%Br1תPyFyAY.&%ʹUwQMUԕ+[_@~.HB-?=0 Kg"_^#vZ>R?s Q(tV?~[P^ADdgM  ~ۧ 88] /u߁*5lHX¿Q>ײo:vA!sw b0Y"x rdm{% 4ba}%o=S EFHg*Mtc}4Ll["}/ϯ[˙㹝:+ |y7@[Su{܂Fwwjml` 5^}rE_oAU;;^ϊ}&Njsn7!>rcϵ%5h0/?Q{ =\]kFIcڼXwqAKMުpSw|X1`KVVfm0Q:S@#LgTF "ǘ|b|HM.;yODwn!ROVm\7Lzsɖ Ty%JC0!*lQ 2+0yQً,wIFߊjj٪yJ8iZ8m0C0cɢz X8^?m`A=CBKs-B[MZ3ʒ`Q;_ ,H`s+lDA%ye"*rdpn˜*٢Ma9U r\bRπ'UET Ra% Y0 { ;ӛޠ€"e>%,LdFʞ>b*%s;ߍ84-I{_P$X>Q*A2'Q#[:@8z6& G*&HfKR!ZfYU\_߄b"בJWhZK+TǸMr3c|1,(Q>.+̾|'fDT'hNExгV瀣R˛J:,e. WԢn[^䳈E.V/撊R8hUKƲI%<#x NNPe5@9 SĎ\l %(H!2ENX)vcr{+ aaVKaBFоlSm)\SvIa$nNf';erZ8Y7F8516MRsrp?q3~^*OMV=k% )KD]qLpy{PV !H L-v*$9rQM9˳v#Ms~)KnUV2ƘLܖRˋl;V0}=FW[-&a%9 A2s-a)X޾mVy,)osId}|ŏqNq;~\haD\;b1ihN2U=϶LMEN!'$'d9zwBvpb8`IY6/baLB1ryIT wv8YJc9K8t,Eu8pR#RpwjґZ by#[(J ʤeƪ61"åvkp.vcHL -ⳀZ$e Ō^?"2=uHᠷeX/ |}Vpߔ.X1)M7(Æs8sOaPvZpX¡91A2Q|!II"$wZQ3X")ʀПWߪE E. 
p@׏ ,^ŲW<@7kzv;ZoP>$@8הl@x89H y5]?,kpg2\/0xN/76W;4\jz"Fy*NIXn0ʜݸBIXq0<\%@GFl%xJ3I>š銛 뻇qK qQ"ehfчcccccZ lgNN<8/ dflB܀FD,FlMZeZ2Y"g ܏ )E,o]{OI*(L֝v&LBfGQdClC(}Op&3#9UuN3HmZ5 ؏Ya*ZU{:烃ZTxu5BNZp>[F\7C4tقmZ{.M^xC=mZyج#5) 𼼀2{c#.F0cGD"( q:ƍh . %vRk, yh)UUNfqH2B[g c\v X4HfYkM䣑\`}7JjR '}!^qȥ>c滭l44ԷD([ђ*X҂:y4(} RQBn/1_)5VáP J$gY5JXLݰ?:Pϗ+#xEyծTqV$.%Nej 'ɽ} @!:G\b,M Oǘ ~xtC,j*bxnu6jjrԲrY T CÛ79nF+'SBRڌBS 7}v'i¾Bh(M@ 7Ӫgp\a5N8%Qu&ɹjoyIj'RY )LU[_'?3)Wm ה[)S*_w!@VM&L Ȕ|X b1GP]k(#'s v$Lbɾ M7b] L1$ , fK>РEUI[\|c'B+MntqqPc{ ӆ5VYr-M[x0ncwNǸ q*BhatӞ%Wmm:vۛ86݄ͅL%3Ȥg#C: $tL1* DܦOm&Rn8R 8RY΢;[74k—7RgZ̔CDܴJVAQ3TdҼrVіLЛ*(y,@R&GҊJA 7 &DvoR# '@3?^9!<}Vwu !umʛ,PUAkWܐ=+zrќU̥+9E +/9圑Ug]8fWf9kLE%-l[Ҹs q^K3$\5"3ۮɸ:4&8hrT1Ce2ܤ ڥƘ~ԕ-`J TFeCxzW8 !"N`\MP{@2:y^ײK#sз 2qxACڈ^{$ *}2cB'OvA>:tUC>pvCtIEy+ f<]oMV4X'M ` }R{dYGVUo[F?EWo=\QW ZDR",T¿MeT.F@Lo2p5 y28oyE3Z+Memw(1d ?ϣ`A'NUIݦDEIB8 aRd0(G!ER$"]P67<3jPK;FTkg!9G^$Rb].ge7+YHYH*b03U0Q:aLD#SZ+)1zE`BJg!T+ Yw嵜F*ToUG9x"6]N, m! 7igI:ȈKAEIBI,ItI( o'6.Hgd HL{́ek+of}o6N5$LIÞ aBKÜaz:0̹!V$^N#[rΓ"j 8~#mJJDŽ{$; _Ɏ[Gb T`YRmT9 :_gӓ/5%5M0w2 4/jLkJL)^'M+-OdC娗!QyV)C&03y F^}3f].aauc#.FZHeê̙U^4,pJ''!NjgOQ,""HfkPRzg2;/R4%6s7|uE&A{Vbe, $#sIamB$$UԋێJ | "{U]|q0 &'1/5ii-uwIE8T'";E%z3jɳwMa}]bM(N^u<;t 7,XEJxO+1#G@>@,p_! 
b˯_qjTUVR/xdБFV\f4D0-:P< &v$/On xr}w~q:O 2n(GZ*dEL1yK;&a .S_Y2E 9MǔCwy4ﶟ FGS17B0cu90dC&v7O1GYzw͗ ?m_}ͳ/^Wsx|ۓ\_f_ݝg;/_ڝ^C]{o^ޙ^!`ճ__vׯ?&(>x0Ћo>|՗7@F_op#F/{o>{/igRpFeRfk'ޑ}'dR+l7_7)̏CJ3B;eݘ 3y2t'>us`Asg<<ۓa3;{ G?߆3VS7jz=D!M?=?E=Yy&ԗ&DslV&.'?a2g"&oqi3G"T/&չ[hއ98gN޼f<7{ϧObl'> o+ދz5D o3|'#ayma`߆osmw*rk/{)Ͼ' WŅ` ~΋T>b6ݽ@<z=yT|u,n{zz<ްwH +[}>d9&o9< y][o+iwVO,M8 xu&Q$Is[3ԗ<8lUW$~<;[}/.J SMǗ,V [gIBOK`o\_K uחoHe{Ӱ??w~O O0_(_b>Jb,$t9sŠ ^_)߼~uNqtfdomJ=7 =)Heϗ3 Ҁg d`܂]-d>AR`o :Z\S#}{xko~U1A)TbOkuG(KoR@cV hJjpD 2,:ovѠp+9qŸ}=??Gig%ତ44ZJ$ m`N%gdrD MN\ -kslC;C;K:q؋f|k$`P0˞@LH 5F&9CZQ#lQd[1cĆ$$)HRW$Sd LA)H I 0IJ&!oIJ n^%3ـ2C|#p|1G$0b({x0N9Ŭ‰qvDiAXvNb_=-b rJw"z_Z"@ 7tL^˽f gER[o)rG99{d74<;woj|m(" b(B @9xlA;z23vNJ} K3_վCW}}I͌)gdׁLPoP>D3kQ3qz'Ho)@C^"QԚEu)ژ!B)ϸIZ!wt1y!dK$D (E}NQStNQ)C)F|<;~W C \x9 uKy}9saJ7^ :| |0&ꙋCkIɒ% H 𒶊Z [Ѡ3C; .\-f^FX0{V6-Wr[MIqc)H.n*gbg%QIJXĺ/rXĺ&5umuH***ZB,+ s {Mږ7P Ўxr\K67%=Ry%q[:T8oN gLO.Ǚ#2cUozv-M6_r*l)n)z6vN=ݦ20ScwzsRYQV֚{EƂhZYF0 j[#mք;(UGz]];m#CAݪ;ARqtZm}6$[ף;홨$QJ!L/SKidR x+ Q{<&v,hI 6Ad~Vʒy"M [acw&uVDj^:xc+mecTZRFHҥğ 8.BQg,"ͬis _m^2X)Xe%e5:cȲ/R=tvQ2#y[Nmʐ1dᨒW/Q66NW͎/GBEGҔb _/F\Y46JizSq2%PK7%O-M4G+{m{`{4mmd?{DqY횵֪Ͳ6YD=(hmרI&B5 rgXkmWl7ꨍ!6N@jy&n]I^Q6g>8an-'iRYN; |w;_/3^w+6>eK?\93pVm:KDт u&2rv8-unT!6Ҿm.NMoN`?7P:9!70@tvTSK|s3XWBiu\h 5ݙ\vPR̢c i:ޜ&θy"[ƥW"}d _zڍ-|Θ9H=|𷨏pf~(;(w>e1k[( \!QRZf,ڊ>- <SkB\&Og7'2&F͕؛(P7]ռ[BUN܎}aʅr# 2< ٖtS^+^8# ۡ?qw2 V}p _v}<:k]E4W42=1].^uW\BV4~h.ds9PlVzg!-~6i p#2{Papu?ݙh3Ypk'́M3|}t|k$'r>۸xf337AB Xmus85r a5dVȡEf3/14/$ [ƀE mحZ2 2U0*ŹhUy^iƒ '\Y4$mIkYC,3.][DKF+1,Y/iZ5:; B(tT% o7"$+dWM%A"Ws\;cX漈&01Uh᠙)c`yN;CԦaM?q 9лO6E%qV/lRU%E? 
W$t*NWN0K $1z9KNh| rSHŠy2Aýe Ɵ# sV΀I3| iQg- =j U#cm}+a8oږ[Y'-PkÌ逵࢘&`:]tpd~g}FE6<Ó cJKB$v%WfLO,z.xf3|R9jr1Wz8HPR8;=da(w'RL% opG$elH,e'n]ILml(+&;R>Y)dExnD b'y#hȬ]&d<DY?lCZ2!RHQGvҀr7Js߮Y/JG=iRYKc !Pok2fj#fJ#FU4>9z#ݦF4$&{ȆbgۘRN춅$'Kaw~'`aQU0{ٻHr%֒"\  FMJZ= 򿧪L==ݒJaUX!K6[}!z~ ֍ q27=dI#~`uw= 7?2wpQs}\;wl7B: \//B7)æ1`29s8| E QݡEّ4N5WTR+IZ(/{aX:CXGmJ2ű3KD )+K:#Fે/d4CҳkߥF*KXr闁宥'd#<L w݃r4[=9%U?DZ Qc>v=Ga<0;̀ j`VGA_*&eFXvאJKV9-=!}iKFRQ l'p.E Ѝ|0ZI` =itK7]6>NnF%f8 2 #!T"(%c7J-8 lCC'-Fw%6)5lz5=kmݣ%G$w0%.=JݖLm8ܖ' q`PC$LIR"}`0oH? ^8Q;ʾסᣢY:Hiz_";! s݋n]ZCv%BPy=mK# jMί6^itT\a`n+lNuݶd&Iz4kD&K7m6edhj2(ϦƳzI,e߶W> ,ce ҍE &E]M29CQMS6ZA$2VYul&Шln$+ɍ,w;%%n}kGmD"[I 6 |Z1 BsNd-pcS(R5$3 0:00{w"3O-k}>[o[v4t3V# "Ǻ1Ix(׭5mM.S[ېIj=Z6Y 6*Xh ]DzzIO5iNo|MB,if RXUHfʳ$G6S*ra"%gkm[Rn5fV,=adCr\oIA9vK#묣<ˑm3БŬS=CK큘4R#b͑Bm-(pUq"EIbXxoJlݡEv?ҁy^0$˞s lPQ]M#١C$ #3&Dq Drś1XD>2w`Qb/^D*,մb3 IEgH 2M* XAّ` $;ZN}h-ݡELS0iY2phl4`3*SHfFˎ:x5J$M@a*m 7JifB%jO@'AyܡEAM>n Uѫ/-roQݜ͚!uǣū>?r[ߞ|YWsgoӂ ]CǬ3XcHY-HQxr82~@BmQV3xpQ0cvS_|L۵5cX=8XHd+cbs2BMZ764FBa8ؖ98B"lZQtykvl!kEcZD6u8ʼ!(>*l iRmj(8j`D.69oYrYGEbyxӞF%OKM}$TqDп~rӿA+*-P;l]L+!z|~"(ĂZO俾1sL!,{ڐ[iPW|^]|j>3u77\7B@x'noN:JGyj.~{x=8weGw];pT_z% 㡑kj9NhM/~qMxQXKҍWG%Na qr|9!'l>#ދ۫Ȏp}yt/#S9Y!^o 8gY{¾Ox$:R!4ǿVڜ?Ήx;'휈svWI(yjL릤&'wnYP28ĖTn 2c̲"GJ@Ӯl(>OMG#N-$B MG7?ޗvAN/*5-f-oWV:y-"8žwP .|/w'J*~|& MK鉄'HhzM>l1jؔKi9JFg$$VF@!'h*a~$v3jScϟvs2\FӨ;)A[ 6;+YWE|÷6#ӳؓ :TfJ3dvkmVnyfeI& 3sxǡ3O'Ζ)`T=j_jX&݃9P O$RVIV=J_RC m4|aLx#蒗:V)mF^S;Rc_3[>sQ  Z0 A6㒩}CAfȬk( ޏhLI75*6չIlXS>hjQAj1&Ȱ( ߵO\$Er\`ZSwrckVFY"s(o\m&'s bOS4ڽoRMhY(7R{ 1S[$ VYRmNmƱo9Kj& ,:n4WH*x̆ڃ_!~ zFxle;&g3BQi %su_28'3t/,NZөuO%.&HS 9z4b%3ʝGPE96F V~; ޺l< cR5CPiи>[`a)"c*ZyE8\TZ[&RB;w0H/KLG6Ь quACkLSJm@ϋUn3h.QZd=>KUˠ *]D~Ɂ)DF{I秥Ѹ 9>6jsnTӧ-o+Cr$uice>_[{XG-k}~hS0~C+ ~۽-;3fڙlb}n&m3<)E6?,bOcvmhSl # ")))b\͟MڢSKuw9MEj%w:B(}:–dNۿFWXQ_׳mY읈rbtRBkUh)` PQM|P\8gr&R_Iaѷ- $sUm߃yGk 42]r彳U3ϊbE$QVIҊr{o\ZtXO WoUt\6|>ܣݣ[BȒ]5 9KoTq߷ $[NF q*XѧgC"y9䫾 ռZQ.7G~~'`zz땸qvƠiғڨ`EkyN@P5Ms? 
ߦрTK>=rG&N1F\?Uu*1s.JuJ5=RZJhkV=ia1J#1zwCЖm!Wo*jQZ$'}oq@B$ɾ=:k.o5Qu?V7c ˉOuw_N YlOix"&0)/_u]|j<1$@"%BDRǂh)r}s\pO .H( \8ฤDs[_2r\!ưldMd ;Ȍ Rfn^+"+vRm %WdZV dux9JX]ǃ % 9qTCZ)݈d 34"Ծðppv0R'WJfR #{Iq&|2 {`i㰔ReBWҝ=K>,J!\9Y9߈.櫂Th&U J~+VpX Qǽ12.X~AA4ȓ:,߀RQOg/%[2*cJRQt 4%w!=)sl5BrNf*V o?,fY̷RWOh[}muӚa-.*X"-& 4Ff \]mm["f?\p/W˲i&N8sYCi[ր-0Aѳ\Ʊ0tmI!r8W 9L=>SX]`D%*?5^GV@>,hf]ۀ.t T!MHZDbH5OpXq*,xwH8'N_- 2Xs )Z+u-Aqa: " 9G03idpX1JW搹myЈڇ}GczFԒdzx(j‡s)yB]؎il?P|#u5FmB T o({ " iu:4hZfN {lj3̧EjaR=n'6%P*}83%U5 T,\T klyeC Re\^GՈy2S1aUHP-u<1WC'rh3آ4/Nr3+V"I9\tk?SGsv̊tI1JL,Fo/P|'#:xPR(n)bx&c)͜pʷskI/RlL"}0'b( L4WftؕC9ЯDRrKΏ3\:XF 9KJYp`wu\ɺd+=>5] A ]t 3B#pKZ(Uӝ-X(6s-e*K갬JGQc^nKF%C&PY*Ұ"Jg[_*8')G)NeIyM8c<񲛺lijJw#)l|A^`kl̗rܔhFNQrO`*IoִtEHvô u0ɱy8pִhV!oyXirN*nF B(m4F_0Ę0ErL.bgnf"718%[_@6WɼȔ<-"aۨrfĜߛ &. 찏x.J>Pj9QWW+"J+- ;&2]&yWs<%^0nI 'Tᄵ\ XS;*oSFɄTtvp;I>ŴvJ;bs&RUb{pE"wW)P5N 0GZ:|jgz9l1E1LQFR1[S %W>`{6޸w֖o|9E!*Jqܬ5v5lQ+w"2_Rlqmb+.\6gݯ͑e'rqEpgcez^ ?c߈]O=T%(Zzf;Ζ.ƪ/ M ڸxrn\skjv"OVeg6 DHd;q@A_)MK3ZD"fwV#SGZ] ┳|>/J8.3OWal4eeoxjv|[4mFy{"ˆ f2lg -^ K)6+,K&/G [[Q\R}˹R"nc5wVw !zMB"lVpx:#"Dz'a 蜸bW*iZY-iV`R5/!LI! CK(CMVQy2!אl/x/w\/,Uwuyɏ՟O?k8T$mN(B[ԡH.mlV4_\X(W9|$_^\XFM}83V"`%ӥuk DSVQEcoЇvk@Zzc>'n).) E~ž֖ l2[6HKOPi ѭ\Wm!]w__2aHpu|9yx3s..x⤏Cgevܐx]2d!hfTv"7"Kك 5:uGS\y!YWb cJªׅ *6]-ZHE^,7 + 28__]-ffr3W¡av\sGTx q _B &s|^Iܸ s/5 l~;_ؠ6QԞnBZD-ӈS !͔f5؂P^hS5Ƅ}#X .,G6eԥb?efc[k3qo2V׏&] Y*oz7נ4ZlF$ra*h蛍Azz.Lþ?wWt0亳Ͼݸݨ ȅ {M~x3 N Dۿ]@$SH9kr&&Lw\C؂kn;N~ nzq!yn_|rp .M^zn|btrVЏ[q(rϰgtb&/ñ9"3|v|&)a89Йëp,l5sKþƨwy85@N'[`p HIN[&v!`oFݤe3F_#AT7Ï[8 =L$Z{_/>wQ^ЉXG#u:ؒp*ǽN;pa)B%g#VaSt ~w;0Es"\gE8ʇ_a/FgdcƐ>xpоɹӆ.8/A ??iTaFiciiևMw^CvٹoMx˞۽6|<x0U2t 2A`5H7^r\? 
ݫd$MAH/x}7!77ކӆRXYt_zn% /YMbQ掩;-dp&G%CoQtpt'-/\&0>o/dzsM-z]&mqh~ t,i?[81!`ϲ{/0_n[ރՀA* R^o 4qn/1BHpJ6 \̶e>pĽG˨cdD >!Fm &7edJ-2}߿a o~L_ Aw!̕.r@}:J xLqI)rCǔX&g;_u/ԗʉ<8LJh4~D]pE&X9*g$?& f^&*ESB +܅jE[DsADif,Bn2U,Bk')Pc] `Ag?c5ai/f/La@x ;p<:G#J"D_ؗE֘8G$@00 ֟橍~D+:qĮ ["bI*F 8i5F0GR58v޿}+XgLӖugo^{czczczggSs1M3ܱ gvǓm+GZm԰`.}5 ZC8nI] >}*soWS;ŸD;Q~19+Xu0X;woQ -9Cn^ꃿG}!sc ~*{0ȇz T?P{uP?CK3H&+N?gogv8WBS L[ /ZNp[^'}*l OICV-=BM63?şBON(4h2OKn2V>8Vu?}tvFQ٫FXt<{^&xo/+EMw^I62$`y&sle!"d#Hxyr5Vύb&Ox1s+۫.\ ):9m5J(Lm+l)e|7y54,?2;l.NOOPE1W:x)H4Ngv9WH AAru^l\#lmuho˻g櫓 mqSZ.ݻ~O&&(hAs=f٢ RJOJ)쐄чuձ9{M4;y?qh#Zm{}r~oБ<3!|8)+H̆U(US gXZQP([x%+c-Q>nď(Y)ve&/.ETR1UӜ{ˮCdn39f>ƞd!<$&fցɉvrfpl =RMP~j$Z;XLۇg#d$V,EϬjM>deRQ׍( 3Ci"8 ZF))DV&SuE:(dْ*c8-;63hTxx h/>ɀ 2x X|%O(K;Isz {;;F#<*20?C.Y:'օA edQ/9`&RJѶ[H)#b*)%SLR(h%he=x')jC7%̯>\^]]<7^}7EHTh'D~wucZ,-g7]9(Y0|軬L)""2LVN f[NVhsp J0VFSʱM+Sg$C 10#ijev+lpH+N:MD;[[Qpjj7sc%;͍#?!yd+lɉ!٦vZޮA`kz(QYZnk]ƶçi, n^2eI$rub hͩ$媞(J2`st{^W-&k3egN%+$nZf@r9+R&Lu$g,grg=Z$K31ogOw*#`[S͌VdҮf%^K&0]C(c48P"&0(*gL(:3#L!҄$2z !H'GɻYʋґGKڙ-ꡍUUhfN__꠯L@nC+Rrv9t?ڪO#}zaޮA%b_\.lؒ%N)oWbZ撏FgiuΎꤡ;*(N*K,.bypz` b9BfrN/pRDWO WLpTzLR̚Smrfy 'WG#dBd=0F@QO$csVì9G ƏoDfzCBHU|̀z ~Zmfa4g hYu}|=Tةr͉]^xZ,9#(=,ld\g#;Zvu? )vTyN$:dD%؟[Eh:AqC ֭T*A7]#+Gi+S<݁vֺv o;|CLM:X15$45&L>q=5('-D`iwSs.]359uO9umR쵸Z9ם.څZ!ͺY>wfF݋(dn\}oV6r$Q78d)d 1aљ:;GF/jzӤQ9 d7k#]`Ѱ crdU8ku̢DWEU4 |vc[=)#͓goVg!93{hPfPoK{|ҟg{Py#4VTߙf/"!k)ؼ9n3x{Ց <2屢 NWUg~y…=fF?KC[j{K'xϳhӠge!: AW/P]zKSWR뾯w;$Hqjw8kׄY{[Qmig yM}偈FgX5߂}?LB -&czu};H I=F(Q9.94sQZC"t]sr5Fld/kώwCăwo/ofcVwsOcqS2_v|c\>м{`Wyva/߼}^yn߶v.H}$Ѱ*Kn!ޔz ϱ1t~Bk2j~ %gNp6F/諮3bnQU3̒ǻuN*ATQelR%3ag2YZ^ ` 6oli~s3XYqUq?/cGRh\:Y/?ŢK)?l?;{@3A١dփ3uQШsPd`yݭ0U_Ej/ `t}eQR$2 3Tm5<7%%SfR4ˬuГ w\f~Ǎm1 M1ޙf(+%i48f;+[Uo8.z?>Տ+˟(oMtN:fY7^HLG1@!.T%QP6sgZeT6N>-p ;w/n{N5Pe/QʼnOÜ9!סu~Fkc"?=pgP~1—&RCj; FsOހ^ }~"xi?sd:[V>fʺaCV}w"Lku1&I2 s<{@|9BIgxNFd( hу. 
\;ZrdjhͤB(rQQJɼR2}ꭡ`pz%Cڽ 4_][ZF7 lhS<,Ao_(~2+'t~g_:p~jqY.]1?]\U*1;pBWapwG`NgObG]xPQn Lg1l(9#ӟ c%H]!=(J,$ELֱ[S bDúSzn_4V!!'.2L+|gfea|d>avv i\FNt5[Wc6G_[xuo^nfr>ikq#MɆ9p1cǍzEu׾ <jǝZTW2Vgk2B`K|Y$*ƤRȥi1$ۚU@FZcpx\>$o}T|kB⧆/HWc3p0=^2kdNs>~^[W`MJ)jRKq"}ǽIp.S4w'+{Ҫ-yd/@RQ2c(H"I!:$_ˑ<DVM&&5T v\P eK/sע>~yqL.G|P*%_K|\K/3¦ב/pцS6[ڡ9\A)+%tܴIH#q@g)rk1+0bƺZ tQ .:+6[ )dwt-S؊5VT*X;ֺI&DKd!hM$\;Wk6*|NFw/_p {5w:ACEg")1\^U-}j &Byq}[Tedz״VH,؇Rq?<Y?fj0 <{o&p3TK ^Be |?f|w/䯲<6YQf]@͚m7A9;ցd{WIAcR1ݢ'ƽ'7=@nhk&\`҉^SBعF l*omՖ;W L E*GclKE0k'74̊bږ$48^L$S,1,~IR:ۄ+..be­ɻ!ƌd0t|猩[lŐj#e&?FʼnKG ڧvWߝ#18f6)J=_L_C4BmӷXonMw25$uYb#jsWh:!{y 7E~rVB}~xX~ m6nuy} }8r4jh+l 0%=e޵LЌ  GRL`J~ο⩔tRlLaUzKBˮ^%Rr`;)t7!@:CdɷoCQX}'vr1Nd:A“&9(#I٤raT)15SU(w3#Vx[- s;(¬ϳ,9XY6^LseLju dy}*h>n?nڲ){$SI{ghA8Iq7ċ (~rJrrE1a땠: *6ZpY01 aeI{w={}^EK+bW#>"VV[v,tʗ^,3BZ$aQ' ^뛁yWŵWm^jWd7cwe1^ﺙՌʢ./w`)վM8Wt9DҶQ=]7 xOi:g'=%=k= ypn?n:=>p7ew/ƳˇOt%RtJ#e2eѽ X\pz6[t`VIy:Y,ǮO=u\XBSb?0 |y>^@Yr/,(vG>\̯7Ou1od>䷤&OngH V:P2εj5ĒS ߑ,ZÂWե?o]{ߒ}Rmnjc/vU*m֞݉U5^ssK]iZ^SdDIH!Yv<\M|jFfl}7Vԡ KZw'rLeo/!Vg֖@?%RնJvóvr06@S3L{?bR=u8X8tސ_FuӢ=w- :nw(;l˱dI8nlK3yq8rx;+'MfFvͼ۔}#Mc3ٶzt ^6$Aufٗ(oq wP؆RMv׺O_|9Ws6v]M Ȟi1Oc-n{q-)E7U/Lk6ĚDI)ĺn84ǝ&F}V&(>7C!1::OaZ5F@FNijfwȑ9ZsR*5)xZnbJk B5怓4j̅£hdkc.@F~( g@iTgU<hjh,Ti0 DQ҉b! 
7w_ PrX rYK5UAfB0 2/RmHm:dΡw߄k]-ʩ:Ȇq Ww~0=goxdcwIy'ӶB={s~'@VϮ_wOO 7</{xpЛ{ge̷ENP9SJ>?4{\ۇS^;wST}|#eጨQp%ublEл8}cwCgx,{F#-40%a9tpN2lzR<E9P@`Ijq k"}wEe+*mWѾe/tM(ܤ4ƴy/2t 9jD0PwEܔt+9hQ𽋛}!~:7?7瓳?H'tY| Gh?$@ T gZ49c GSsI%Qv@skw=]k:S&%y,ڈUVSءtk*t:mҴn'ZU!oEj\_k&c{k/~GcuD?#<,}Q&i˲x?mF nF?}.D Έ% wi`W eJytSD +d=S341bjA;E#.UXfoZJϏ2= 12z{p A5Bahf,3E95 HtehjHRvVA˰Tw` }"<@f84#*jV;&4c!BLEpz)m9?XPlsEUжYcD}\V R&2@A}L3 婏\2 "R*Y *sB3@qhѥqĉm\`Z׎3(Rs=Tq*s]Wd9H&[ڙ?ѬQ邕c+*|ŽL7VZhh R!&EhXJ4M4JHfqFTis,սV͚1@)ѳ7Hɗ/W&jpSؔumJ Lgbus.}COKؠ80t3_dP¤PQ@nv tNA0+cT"M' .4\y ު2՝`B)v@Ҡ֌ZICVօQXgzys@8(*:*pKI= K`)%4z)$:X@ 3b- ¥¥  QߪJY3!H ) %*)A VT⭧qWS[C[Ox<("2̛%1sq&O  4ƨc!;卙Z*^ѦH7{c}N"&]*.7Ewqna~HJ]G}aW_HoE"M ESXRO ɚ1VYm:-^7^wF Q0yqJb^ EZr.'s!ƧM]ᬁz hSh'n1"Vk9/;c5Bߝ`F) kRc q;%@)jOReR:a~#/v(& غ$/CvT(7[z`(:ޮ@dLeh0-nmM 9k.yC՘P ЎDd$M42V9fcOdi@:\@0{0_L?5Ri. @ҳ`Ϧ`J; bI@Zq .PEy: *bި#)u "8Kռ$odՉ1 a aŦfTRV!~A&(LtCD7\҄[k}-aqF?7m\?^o'נ䟽Ƀ~#ـJʻ&ag-~qbEgXiRQɱgpL&JSd0;Hq8u;>7c@ 1rvfC JXZz XWrpEs)%tMW}l<Ś&;y ̿/)J]lG FE2>cE ^NyIEm!׌_ISaA, 00X`wITBdreJBHFeQ(4h/x |>8#M]6%EGӁzV6=(6hF썯BaoCTICxygWu-4{q:КQ>Y K b$ n nE`S/)N)UiŒDPHG8>&4eޤBkdh:P#tF  !mʪ!'TIҷfxa:)Ap$AD oLfnoO\B}v8GՁIHлi&>SPg5 o0kKM4=n/rzBYUdES rѩJSl*S'1Wz!fdwZ)Li3F3dD+'4W1ΣJ,EO;m1L?Xe\vc2^kʥf43R&NB4稡N yܣ[Z0Ƽ^[ԿJ=1t5W]H"é:%)Emb eD,q 28 ȕ"ӡtCs) P[6A>̈.[_ #D ѪnPoldWi9)mV;D ʾ2V( k4$Rvǒ"W`xK드,uR<\}Y>%hEiֺ.J*e 4 {JODx R3$'hp@,j1H:Tb+○ͮ]h&N0'~tc{/吅] q cXςo5SKD*IPģ_#<:?):)1Ա(]|-cK:D |F!N?>Guo{U}ҧTI$6mpً0 CX)XEDPXbqTUPhA9xIM<19,ZFaM(7JP-d5&S ;y$qndS'edJqESq04Aj3HmXj1z4Lj|Ω)&Ըms4!mj2 pM",sq=r/6nm2{m4aB WPk8z;RՋ6Y E  lظ@B5zNADNW3hz9HXD Hٸ*"1f7NBh%ˉ ڗPqOrAs{NB6TP`iq!Mƺ+ٌpsw<\k][ԽG)uU<ұt鯳/Ccr~407ƽ'W)e10G,n^{=|r֛Gx>t7Y&]wy$B:U9=GyBӠMEJI:aXJ'o vHiaȩ2OQaI($FW(97"ԣ9Jeuu~rՎNtT:y:{yy? 
VYkqVe`/f ɾl {gIvEIi]zDv[-C$ ]"YEVw3vgA Fvw')m h@7Yz3pTNS|EXI;% /uUz޽/2st㩔C3Z)w9 AYhwW#Z*¯BP6 =J(L*p+[ONq1*U*`Â.h5S.,eZkz/LH-&DJ=+PxvA)J`Q8nHtk4t|8RN42I R#aqTr1/3Ibm q ""T`0 ffnfHexvCN`I+a$wȈ-}8fj 0A=0T`dxS"( [D$*I86uTvz*=R,N)`+X5b[PkׇnfRN^'?9-?9\&4bUFS t5~[e9C.%dX9],g>](>^o` Cg ?{kzѕjѵFW> +5eqIϓs}oxn$&'mg * :y[ONӋ`x-̃lyAh 60kbbT8Ny=Q/PUm`Sj\8o!>kwKDWL02P"r>ڵk>d"d[gZ;򋶇@8nE~@k&7ZY3I;f1*8 Q_z{BEu}Ԏ/go1SZt7rE@+aD`PԘ(q*WH{.$F#h [EaprÀ**ݩ cqbT>KVI7p11AARPe\[0Th N܂ N/FT{@t\csptfk`v֭". xxv" #zFmiY5G&d{ζ 7'bLoj8|T^ .yBׅR% T7#'54 ǺG&adgpx?}>p?~g̛1L៏~YбHIws[owr (3I]x|ޏP~~ry!ΤCtVY cǦ: Ag5˰ quR$:~U"3TֱK*.`hrPđ_p,uE8IWկ(N)HԙrHk"V`.< 6#?ÿl'3ٺvpipY*Jʑ}6GV$^kD ] BP@6uNO _c>Cm PwCO]@? ķdunm*m%Oȯ.ן/9Ht$.?OFUȗa S&u1J.b@CR} .IS9RhSa_v;F R%T3. ٹR[)L |[":C45:JJH\GLON$n݆&)-[x ؕ\pem.@V\Mr䅚b}LW7]@5?*pDdA"Yül9K}Mhf<=rR)A*OBWlJ3Iax9%@kDGZ^XFKHPLfϨSs}OTε|Z  Cۀ IdomS'|n֪αJ]llXBVi'Gur_'Gur_`P>6"\qE0`^QT4- GDd*݅S$ q0R=5ߢ}{b.wBoۛ_ 7l=W)-jΛzw0tTeuA;ZAJC 9˄A6(Hڀ΃G.$ry)ޯ m!p1%&)Qt+"*g~cEh*띜nhWl&{'Y,x;Q-:,!}(bx8C H 0 d 6I1 v1ΛnP焘&`I_#urqetDGA!"mWrG@VH(*rJ'"dTG T=x.Cd11jߓג(=QAfߍ;ߺ@F젲.MOJVezU #\0Yu=)%o!d=KwDƞ|\ q!$i2<]c/D*<*y8%TJ4> Jf=kfܳ).#ݦ{up}! ʞY{=·X6,S}xїL?ۅU*iR< άCWL W`2#oN>DdgZVN:Ii|1Y._8ܡYOC=ܹ``:R}QӖn $?b>;1u0 w?8XoC]"~y|P1Ob42>Q|(e Pshxe棥0\WcpK1e_׀([:wi.h򟆝V9:_#0c|]W_] ֎e':wMO)BXVoF=¦ɷ@ۂز4>0ͣj,ͻ?I& wr>^2z@&\C0n=tθLF5;1{wzj4 U/V1j]mgQr0T ӃA8Ǟ⨘<|h7*r}SӍ}s˅d1:K!QJ QI?lj6rqPX 8B%!nH`ȿD|&셤)0M H$*kB*hA#(Cs[T؁I L+f^i:y DDn`\i ē馒iBv StBc CWƼd휡 !3{lqAܼMS-4Pq}4i=9Ũu43ŷ%g,{a)tK a /;08}RW&#=t*Jmq*Q#6JB,:,|#1Z)AM2ХF̜[:=Vj¼,8"1r(y3+p@F\*a}CkdzT 7,8$1 xf` yj/,o䝭9J,=c z@yF>LY[?ojW&eW;`B=z Nni5j#o8Kl:(ȭ껯p3^W7{`5!.Ways:)/7Ͽ~@M! 
33G-}x&q#Wyk0zҖO0 ػWHcJ(Pk 9dA$ Lb̄SЉIE 0ݼ7`mC$9HS)kȠc ّ9&= f'CjmA% kBZH%!BT4d -a#U>VY6 Uǂjt]xaoemm,^}H<+xw\_oIoןcz= `LqhNq19JCȎ[KG.fRSԇ*21;ajm AJ$Lui~jUg ٻ3ȍIEDϸrNM(!rA/$hfSF;ݿ^ԥϓ%!qs ošE|z^oCH/xۛ/bp,xǛ5l~}u]]Oz)T.!`nߐ/篯]^͗X2N۳O~@d6qk| QYh8-fƪ_lx5Zh<@~J7ޖwՠ+jfꓻRѿJW^uq(ӅFe侞EMu9t#ɂU BY\] `X,X㉼9FV^XZt+3H71 ?~2f:{`;5Wρ5CWBݷ"̱0oǰ;g{&6Ik͠ kmgװՖӅJDUkҁ+fZEZrD@Raj{= %> 9ܦ:n8iUW.T`@h߮ﳼ >h6ęC{ێ7'kF>Ga%W[N:iu>Q65U'z3`O-Iȥ 򉏑$r=.jxzxna/SAxmzZ"n]k#3enk؜]Ϛu~w"ߵAx5tN|Z;ȧ$n n&ǎU+V<:9Wа%)5h3]OOOd Փ GOCul2 Q r Aw.d magweЈawōfإ0eRgU\~hfS\A;wYCnBDSV;00 +Ço9fBrc*U6Ty#dtj9=sJ&Yɜ@s){8|)VUtYZǙkEVƖU'O4"a JY=O_!McTх<zl߭ͩւ\Aubfea=z6, |ա>s.a WlD0<# Lٞ{n~ .Ԛlڼ *T&k@ي+&1䥨6L{ɴ+M fb'ѐXeD@Pc`6U0˓m (P[[3nU)#9S@tUh),n}b}]ʰ tMvcދÛEZpZ5}:/OE˩eg(-!4'!Y|~wLCId}-31N?Zc:Xv0+U;Dv  w0Qp8 FéJv!5Gɝ)l?xi Hqg8]ygJ;U7q6Ov ZqNJ~Vi15 56!VOQkn~|uy;bl^9#^/oK"nƘFз'.(.*J cYZ;6_`s4OEv(~ʛn/]-#(Vؤw'︽H(+US%r*>hO\Fha:sHd(<%*{[z ,qYh5,Dlu1ꂞ8`kKJ) 6jc9%$K$dژs:i<*V~ZF,+UyG_K=fqŎ cԗeg2RӱlEW0UW"->),~僳xz X(QTiRÊD8a5=M 2^/N/%#Xmc0;uMޘf4,d "o-T7|z~z!]W+l**= $N!rGz*desJT;k3ahkHGR`D,};+{UiT7:Y_l؁Ba-Æ=HT5M5jk')ށ@w#@_q,'Y ,|I˸Z?w |r}ZHnݿg?ӳk_ڏRMOt_l. NCj2Yy'~scI}cI}cI}cIƈj|f4TʁfѲȵp+abdVCnY# =N_[0%͎VD/g":zYri1/?Ƥ?Nc7e;$7BNs!'\I\HS>$gs9y1*x"ptx \zo#,ЫG߉zG{ȇvH>nbՓ:V=cՓ:V=iƪM93&qkt. 
[binary data: gzip-compressed log file `zuul-output/logs/kubelet.log.gz` from a tar archive — contents not recoverable as text]
Ƥb%qYĐl.1JX>^]~Ľ؁(>p26nTUʘ3b}ĹhBEE"*#Ϛ#@ά7Õ>nWY0:v\]kF%E݇,9 6(o%X/?ߔ>(~wYCnh/>C^&1g)9N>2sP @ДSơNB'>9I fLo|WR=9y<.z`]Ǯxo^p,)fFU#ZiժhfĘEK|{WQ(474XV^BދXjX 4c ʗ7mb~Q<|E4&%TT٨c4CnRgX:պھgG7MblncU?ֺjZZ2=MʀM61KR <3)jxlPZqЏf W>c I/kLrŕsg`?VýEl'MߚT?8yQJ=%p٣7L\wS'j)JQgF(iFĢ G<\ػ7n$6Iv\|Yi{ۯZiZoݝ\=Q||EȪ~-xQsӜ턢D# gEOz ýs"1+&rZD#~ &C_51GB7 L6 Q !:TJ!%V {* 5yO\+DJkFYkwm!3+jv6hqO=d0n?dST[,٩=̦h1\` VC*zH`E0AT d QtD:kPpU(M;gu,;puQ(4!AE;؃Wzd/櫞@U,i 06Oaǂ9S,< Sd+-7g{WoI0%ͼJq:A?mT0,"҈oY)9v_hckRLiw9ح_/gb8DuXFJ+ خ?mڡ: +47ej4KdW)nOvW8`F'²>ܛEIK[eon?qlE 5&n6W[ 6:BW;[]l/af[`cYG+/. a-w>J3sI͙*qʧ6J)y"yhKO*tT0֡zHY[W NWozIVǙǾLp  h)Ǖ3(Jf.eQ:*h!Xr<"^2c[( (Nyc"#.`4`b5iu94+ð Bm T2 r:>eM9j>kQЭL$.r·ůġ#WwZvd-za3?&6[Y!2گZvJWb6_<=U06d!Q-"L8p6"gSx*7mKJy1(:ư0qՊJYZ(<P)Lu4PTT@K[Z(˷2Z.TldUkM詶$&=bc0|\vl[4嫕ϴ|A&,vB}3+5 n UգR*H,\;^#c|Y_l4 mͣV6o?u:N1f%ɷ>zP Bd;2~BfٱX>.)stNf|Jw J*|*0\N'BZ9δZv .;"['6ƺvLoppElƧ'yp<ڋ. "hm, ;"'=[[ץ{n "QkKiK_F:zNBQv+$"Q7˜ߦ@skJ8|3' 3ԑˑɬrTtRf2ˬ2SZњ[ˆpL0df=D9PA>z4w? u{g (+$X.K@CdŅ0Ya;xdM/AYQGٞγVzЏb6ly ԉ #2]>LS{KKM|4v3#W@&sO ~]&{Gg3l<LfZ S'* 9N~wKf /MI7=^1NZ68{ /Y[I(hcԐghd%}4t*9$Oghxn(Ad@PkS`&&6RN*sOW,hDFVO祿Q_駻 PH'(Y5][ɪcS689F%BH9Ҝ죤X\ ߮K߮PbIP|@4 OLh[n>g_Sils*:סEd2zJ` 3L5 eq5J gń!9ZťΓ=$>+B Ê$x=#?CkT|6 HTVHp+$) "Q 89iO}ޞHST!pYIʸbIӈ+im?-ᐬ,l8][$&+:ppEBy#S*)O|~ 8wR>O]ȝ7Fb_0fIacJDqqN5 j3^Ifk卆 + f*մPT1OsV!fD)e^|aa{(G>Ш O iǃWb,(`BaX)=?dkɛZRKS0rr%Ld'Ѯ%oS J1 k(h8F Sjbr=Hi@Ws U D >N*, >`F ҀE9Ê`Fʼn?? 5FYZ\n|fe|>=)x=lP&RX,{zw6Jk ڞrq|m$~ljHV,H>.~y d04Av+|?ϾzWT߾Lt5˯zE ?p+&vŻmxԥBF{TehAVVNn)vDl p#`\XE/e b~s} Kd&3Ro~Y⹕! IY/oa|ܭwr*Tճh-~3]&t~7r5g큌AƟbMق:VJ^x7x +V QYF xImU15a.Rf2(c5ARuу uT:k*_AJm܇gm `N/o?܋9 }=lv8S,ԙY&|6 *M "U=﫯IC>(3W_*71H!tӛ+%=^s 5G2#GD>"JÂh)2NX9x[% DvAO-r[ϫo2:Ta$/ RQET.e.cf^ i/M}'H31i#[w;.K ;E@ 3+aJE+Xh7^YlW܇zNUet mt{ItDWH".O:餷*$O5A[ e)׮ X^SJ*6Hgi߶}bYŭMNL̚b51m[}.XTʅS+$(tiB . 
pkQySqi`_if Y" AjDƈJsމkB]n{hE`;0?4oZ|p*O] cfhW.Pk99` j<9BD8jQ X %")]ABGMHŧGeu}_Mw 1]>!yPyHq<1DҪ){B5fu(90)Aӭe]T妼0c(f0!FČ:b r#n.cA4SD-Z;.a)NeaOžT꜈zc=bR}$' i"-{4NN/ODc\Q:f},MKą[& mrs35^YVx淙Lx=&TY_݇;ٓ4mXMŰ[5f $QbbL9$$3DE¹L5ҵqn$*]Qi0e"̩y{f4e6},!`&BYk JPh".T)H~% xg5$F([뇇{LzG -E)yI՞6F 9GC?cĔ%*ӱsDlH.V]\wЊ|VɺhihcFbJI"I~ٜnE c*Ԥ:ɶģgRֳN*|em\;ёIktOJDzjziHS:-#Yk; ^s+knF~qL;/kG G7m5)T uxBXRjB%/H_>饔b;5ܤI}PQqz[ɡ /FPD%caQ͹óoey#BX8ۏÑXgqԄ!FEduIJgpPiԜ%{zLH\jy R# =S3=( 2%PAAP׃E㐇G QMzLjAG^e FY$ wPhO(O/; .@>ےh6$!O85f#F(`1|.j0OM%ujwX7Kj|~KK=&Yk)XX:Qk]~zlZ'pp=4)4/%iqd '%,q\>r)䡟qGav ,Y0S`NԸa•Ni\,pуiZ=24Qԃ4,NߎX(  8K(S%F|Xth4Py‚ń N4"bÄ)ɆL41 t ӎRd#8>U8&=Г)acFiVi bNW:ߢJJzT&YB@Z÷ۻ7ilw?n<~HGnoS/#7@/7iV쎭Kd`BYFp$HgpJXD Ψ: Ӌ!m@hO)ٖYpIҁyBGZgEBx u:.A A,E2` bOK)*xEBF:hRΉU%Btq2%SO*0{`O=Ֆ$iK6hY"{9P)2!Z M2 xWN1< V63&FSUk!0eRlrd9]XL2-*~Kj@ʬn,ngiLy '`h|ר-mn2}v sS6?]4էm3}ֿF87s7/W96rFx샋f6fv1&uobZMaF{6-GjPܨ_?hNƽJ+릤Uqi͌RK> D2xZjtpʫRisƌ*g_?~ g__6B%5MV [q,$i ~&3V7 kor0>W(nz?x\Fflwu_BsEqc3<Ƶ T{a7X+L RjX}ep6Z*՞BBbD8{1V|2%?޴{t|&q}=NgQBxfaO{-=VOe) _E[}<-0Oe[PSEv95Z8xF6Sq O(#);\ ?2e܊ZŠҍyٶoPy8[bh# =daU;6 mĤ?=U U4Kkx# [.)6>Ģ-{֭ U4GwJ$'nN7X}Fr顭[Dև|*S~?cFr1HRon'<#aY[܁&[4W:%\η#SW=6#p֬Kx'ڇ͝r:8 rMs7Uup˷_3=@3#ƴ&3FJWw߾D'%1{v =1xyR8Ld.u_D"qͱ{~)=1xyii|J ^*SRU=?ڳLd>-sO顏k&I ||RS{:8*NQe@ a&,"td =1x8yAZnЕeT,Hfڊ sXX)#̧ph4q I/࢑k5]O{yqk›p]ѡäV\#UJ(&PvQ ˆmS@MDN'pG{R/"~^.c~3k5?_^yw\~a+c:z%~+~lfEDc|<|1r&,W;oWo&럠SIb1H`#j%!*zyRJ*$X`Hdf,an?X{NƋ=kGr,`?oX{<O^xlBu_.wHT& |zZ7A|@lV63\sє\ivfelrm)?6S2zyyt#^^j&i &\ QiPRRM~qtmUqV|cJ\>̎цE$;3^~4ψlmih`m11gBk[1,LFV%*l^NT"*a%ϛ)mE8HLkW`|>}6-FqԊ?AEV#Xՙѱ"a烾>O899{+~pg:S*8]+].DqHJN+,2RdT Oͽ(?j}m+us ޯ&?]Ħ7WW],/% pé\OVf 1vt> Kuq6_Gpv(7s0+W5 }mJDwa܎?Ύ} Ǿ +rv+\+L]M0dK!ʔ>;.0ƈ!\ࡃf=#DMҺMjB<CGBm+.#rnX NA!C\¦Ky}dQq|$Q9 !Ɏ>>%}9FPQŅ*bsbyӺ+F_/[0 EzhFm(uc-O>)9^H663ʒsg-(LQQ}xۑ4y*~䜃({Va{Zs73qrH;LiWkPH4jӨa"uXptzxob]3E|qtۦc-d#Kqq$քR:[~,Fե(޻<~1AX4RZ'5L0vk I$m.Fg<pzEXϽ J+v,I/GJ n߂ǹx 퍼NJe{٭]{I^c3qMc|vvJgBiyv΃ͪU9@$5M;[lVJ{-Mi/#?2ƢM>aDs*Mo]Y/Gf.6a_mR?t7SyJ~-Ui(Nl !xa%?3|Uܟi`e?'zagw븿eKۧijMmI|۪!W `iLOiM0y<6lrEq'Urh@2 
o>=;(R8Q&M=OF=Z4PcɌ#\^HEgI؄Ldj5~a9½ a @hfKqC'ֹxv [h\ʐ`e9lKBZR+bNݚ#zhaTcvu'kFf+IMHD\!3Pdp kE6,œRki+AT-C ?`-&,!EQ>%Sv 솵AA?$3*X)gZѾVC|JG'fK`7 v!Vsa5ZѾVC)˘"p%H*Az+AS?):tYxO X4ӽ8I dCDԍq94yt}4mI5꤅~|¿|c3lǧvț\AX'&O,NOk6e.ٴvGMwS̏`3_o}mRUG;F-^(Bbl9;>Qkcbz߄zg_K5 `j\ak*1)޷h?Y.2V{xxeC/;}m?0bB|SNF3í2lb+eH(qP6k,EZ}̭csю9^jؚdRhJJ$K$8NֵS\^E(Jj:蚐NxɌV+)'j5iՔ^*@3zu&.M&%e7]P  æH+f%韄y1 -6ZM 8<90X""{y!F5 1 CR@m[~"Ҏ:O=XJ*2ZG:SA;^ Ij2(d~w89{"Bj$#SAy`,}TkIrV YZac<TeR[j>H>eg E`u  RyYQ$}JRXan {M>91.z6)Ɏ0֏9&5D^QP$Ddk B+K>xBx8!#+-x&m$6gAsЁ<&#DQL'L"8&Gjr۶(-:'fb4g$ 3%MRIFz8&IҒL6 6@͵0'ј6-7xD</Yd>mkB]ƶMD{9w\Pb #ѤӁ<Y倈Ln'uX8$ ;IX#7hTS$Yf ؔ%S&ČZeDqj cRPo22߻3 %<䙻(ʧ\))InXqCحNh_v+!EQ>eON vK`7 v!V4TnhE[ y.)#B$at}\;vwb*"V/g(Rt7i&Xamuc az-FnhE[ y.)E~nf솵A1eQSu65n%<䙻(ʧ]|f QNk>c˶j1ZѾVC|Jg[uEvڠ`n2[L|V/g.>d7 zrAV-{ hov+!E1>E)_|E)]Hs7 zj tVʣכhs]SbŒCf]D6/R%c V bک9<hl]SZaxnEA/A1ݪU*Њf]S/ݸ]ƋHzN9ƠOUK.S_<-!EQ>%;ؙbmuc a]($3V/g(n3v69a7A"ݰ6:1[n5RP:_ZѾVC>)ӧ )HMEg~鉶چn[:\]^R|MsnH.;t)z2H*x~s6m:䞍.Ef%Y:rp4{p*ig n ]BXthgAzs.k S7~ef %=VF4mjUPzl9Xp8t0gt+F۪(hc8Tqܥ9BD[#ʹ9U&]vMp84A1gE熄x7Tn"Q#!& F$hmB*΍9?RR}i};Xyd4 BRSc)xQ8R%AĔ sw 2g({G^)0tQ0R 1jiM$U Ve)2wM:|~kǫ>6gt +B6^&&p=!aʻF+&HOTn灈7oݤc5ZFFNI~$9Fe4F3@tz|VH((&0:P.klddlb$1lD#G5aaDr$$kqU4.[Utş)lx<gd.oǁc̲{PJHδ:IX}0 i%'(X8s~ƗB_:[ ffD$#a=(8UhWL!5Kn{q,.̫@Y± A{f;JV:{XtћFex*5>'DkTVxRǎ±{xyyI[tg⓵8$ AL]T OmJH~vkBhn.M8F&Bp7K365²1,E@VIj\eXL-T;n7Q&޶dd6A$k;*BMLU軕d?_~T wB*]?BZKWmr.i%`X}`ze X:E,fg.)ibe1ɶGV2X.P&Og#ʅtqACx`Y q >3q 2D.nMp}|6 'U8DQS}wR&jlRBR:;wQ X ]9x.g@*{"m ɍ15FXB=KA9}{@q}Fw6F\ЎfS^wWSh,_$ MhkIʔȥЌ<)L9_9Aꞑ > m%.np*PޮmK97\x]4I1'r 41i2 g YN*6EW-SffS0\,Ɩ@)a6Dedѣr22+`a3hd9+9KLGONśQ3`(誗6.&>}Z:{ӥR0%[NKۃ#_+!*DմnUVnȋTiRP*:v~EKqϠ^L)&䡕YbTȨ\4NVLEh5*WnRi b(h7Ok5SAtTV`gIUfL^3ԪlV$] tG(,*@zF:%^/FKF[!d}ͨGQY!y3IKV3} ),A(SZ-YZ;D.%baar{|aYȲ[7Hfd]3Uwߟ+d jIcM8QcB= #"$pFbh}裗ٯrVN3_+Y@tEeKfOjť,FWTt![~K][tE^4jTYׇQ6am~\vʧWU_n^=ۺrM}? 
XM9֟x=5ta &l//%H}3zy1:>u u"G,w+~|pVfujvtHKۘ,ŗVO #!DѸ̬$^6_G6CxJV|J8oUxXwm<_G_v:iv:N/!6GFp ~"Xus%KkYȉ*&=V,]G"Z`V&~Qkj]Wwh0psux:?{Ƒ6؈gp:F'F?"$HfERҐid"3Uu=~~Mc.zQV-y/bN=z;.HwE+NtI̠A x$wɗȽ(SũYa;bX+01 Vj?4N446Q!x?hVMc4K3J^]tª g(F[@R@yHL1, YEg=B&dsh(3#0DJk<}tMnhM| кdREBʂ2ZdJJ ky  RrBC\~8st0r᷅'|y{oq T+ (~|s>6n׷Ǻ!^0yīN~R5s.O}zѿ/?.a7gg䩯ȠoַGN $I 2VҲo vB)[}C!,70],UyXy$ K1G\o:?޵-;Bٕ:@=#^X,M_}fVhV`zz>{yG)$e2&]zQCUF M88Sժ8c4kw>tKpcdإryw󩐎,ߴ1PDn ~kN%=\M.em3Uv]|}-cgAٵƿ:Нn[2wp`nB4;>DhpkBNQV/2 ŗonHV 77t5L% d;9nf2Ǖw_==8=ȈߎO~n nRڄ?ʹwxwl0RIe!lbj8_G שZo9ىIy+04l_o|lŁI[-;w&ȖJDxۨ>|lۨ'V1!cz ΡNW!TKri>pm]9e徉BZ%]}vh9s:=t( ~~W)\E> quN`rZȞbMߟe#nGVIiD"R!!F*!YPH90-3 PYJA:rE j-ыj°L ]53%Z/UA(,Q/hr"XQAf$eH 4 qp'"5"8|M^5*'A:ېDZaA^4FHZS9ϡZQèN]J!ٻhshFrnU[ixq"]\H,WP-o66rO !,YC8EQHc )i]Z|81\J%QE2NH<-dp, T*7˘LeE_I+Ts֍jNN'\nD]>w`^M-zϖзߧgg%hyM ƶ]@Z±T B3 %ɤ7r2|X=>/ [zA-෣ߦx5AΝ7z|kӌ^]__~{s9KꋜGBewȞ"Og/]=_w~orG9Kr8Z_c%UzS`8D9F.@Ii ˌ.K CQdUTB{֢,(g%&BI W6|b$ PѩCI >p}FVÓ `XAG5=F=HFdmDvF&})#g"G, ZV75N>P\qqMIKJFrT6t7fm.kމYE*{5teKt%)>hOMlp0`b%H E$#K`8'ŒMN4.GKiF )^+F ӊAY+\aI&˙LC)ҬcP7R%bjӑj3>-]Kh- 1>R/bׯ`YnbrĞ+E+ 3\޴כ$5}VUz2Ժ|_..~>ݯO#ѫK~<ԔDVh/V`RCxr3K0e>Feˋ˾yRP;=nO*`RߴzjՊБF/ᨤyITU<2]5IՉ}V.l-Esx;1*ϝEj,Zc#kevb#إ2+eרՈJCkVE_FhU)˘OW[nz^.AdɕRһk-"VGZG G>3Dɦ #I8n:#s}A`S9q-gt=hYgჾ\h3mlHfBTcP)AY`6( s  `a,dgSBz-&!P! +c,-C}9gpF9^O>( hQǽ|QK˘zTY*1xl"XJdtRtU`}.GTk`L lo$QeC?/~nFֳB$!+=ue"Rwk$ _:'&\+wuk$<$@Cz4h$|' k$');IVr=ӧ>}@5_rcB&fuטIm_vvHy #H3mR<я`\1%~:ҫHf(kˀpOSjʄfF /rz:{Wn9۲hUCP? 
U4W3j%$hl:%퐫+wSM/W0 qTg  %ZbT[!ŠKwiշ?Tx GݪتohL ء~L˳k: {G6{Evf6asz EJ8xyU)=jZ N;h=F=an3:jMHW.+2eYV,w[z~r'mǪcVR%HX~U5/ 9aN]}MK-@JtjAH鳖n5d}m59C y/l]rOa,/$\ *J>2 Qé6T@J䜬7"2tQMD,ʹ1FmGͅ/V`mX>p9TBD&<*wH_尷g Y|i3aqA&`7cudv[zZd&ŪzcX%N)V9Ն#ӳPG4:A'{"3&)I&%TKNfp!"aGIYF3v P ,(eHUF𜵠 :89c9EAW@aSzEf =ִZ))X.7qG Z\gy.!ˊ~ȴ6(-K^8ZQTDŬށ ,Œj!)S-+sxukiRH]D*7 [xTO<}R~R׍m+T-Q-xҽRҴRZUT+vs-Mucۆ}8N Ƒnzc/C\rQjU8e:N"5lA"k M.E 6;) 5IBkF"B^T:%ҹE*+3b$*# 3-tp!HiՂQ-\ `iwF5CzLded T&F%VtTa^0TaT/ttNxǏו'vX_lj-k-HR(  MKgT^AKQKӪUWi 4Z5DlY D_R>x}Rq’Sue$L2՚NR칖rQ &Rm"BcUha@eRLθq1 2-JEӁ/33ŒJND҆.M0O>B8 -|tۅzLYmkẁs$gOKL dK%N="N+8-o47Dϯ ?_~.ǶlbQ\,D"pBi5"v0oV"SOX4񷓯U%HY(墇XU3>lFwx 1vPp~/֢'&LnW4atTEQea(dX86 RfȴYT[[PI= Ś&"I4QZ@R @r~I9Iӆg쿰(V",n [@{{$ڒƫ4Pm}8XVU\E]eT$1zLJ弮 TW q/:"H%c΅TCLӗ;qB^\UeSY]?bJA(gJ3SLc><EQ9-u.TV@neE1VVm; LxCkJơ ZR*Wx/,t5,0 3 pY`-"-"ff-Ҳ;϶U9_F?b;qo;3ͯ:gN|"v\_pW?}Y 8nLHsˡ2b[n.)* ~Xou}y$}vs3rC̐6Bc9hˌ^9?f#+1Ls^a#.b{I"ZqX݃]$lLZDXeJB6UO[[+\[6( M.ɀ+=k_Il S-7 nykD$]353y#FX`X`\)rsrRi*>׹ ?Y/lcAqo[3sy P%P9 4΀^oKJ*^sm}t]4[4 _g(6*d>W1$<&gWa㣁;<hΫn~|ry{lBN]0t# ZHŜ`xgce{q"9p֣Toq5JfNj"BNFsXYUhc9kLhM*eӪ@<鰘I9fd2 =)_"/u^xtf-lȑuYR%ϒgsUd^;Z2R,S e Y{%34dY@mJ,.jR^s-J'Wƶ V)2|;r&_Y<}VT!ίY/lt*b Ɵ?ɒ|@OB͏L0njWj1G|D&\ͭ B7Ͽ=>#0f’ Uѕ~77S/O0;٘4JjGn~,rr?>@Y}M]wO9##S LT=$ :0nlȸ` (uQ ĐI$q[(VRQXlGPʲo3<4QI9|# #H!`gk6ךn/YrVcE}oV6lԀ\^`:O e8hЖ, dW#d: 61ܜ•xTFQ4 y+ I*zr?9B r+0]wSEwjľ=;%%v+t^bʼQ\ʣG7]M/v)Yo Xw[c _EM\N?/ne LJ\ޞiJR^()jtȪť:dKrqQqT$nwSv.ӻw(3L>:Kw5w+&ӧ!7>mqml+읝X.o6f=2`XZt61$sԇhB[tdmGݘP)Rժ|8veb1@5TZ̩Eb͈o]սm1fJ!론?eȸF_( #ze-GZ2~\d :?z'ӎ?ւ͆UT]yrnXV77&~-prcpݬqqĉ(=]gyٜv״0gW=}zry3|Êk4_]BZp˷-.Fɭw)'%>qS K36n-$q֗1$~^Rh(y&05&H;0 m|ۗ=_&--k~oMUJ]O7ե@K 䜴vݯWF|Q!Ak76Kmsޜj;+!WWHO;'7F{ ["iN5Fdg+TJՆŸ2F_LnGb(S$T-a@&w>+5v]ts~p)?G5?A,.1aR~3Nm/?h~:J2YOҥұƧ?z yqAdiEHGUNU/OX7+Pq;nmy:mQǺ0"V@GmlLn]hWV:LjsպID֖)u..`#iOnLn]hW%҉:Du6hyΌVlsv=*uٔۿ0յ&m)w?O9%h3^POA5% gM~ Iˆ港^;|[ Q{z>dp fpo*qYw d&!lV(Q6CK9K[gc!A[DF~-)ڴT돧y´r 9X`+^hf@ѿ 7nluԲޒZ[kV c  Pk$#`ErZҊ9[Mƌ zP*xmVxyFaFTșt#4.6Z)h(bP䙱YqZ*XƮyXð8`$be2&0x% T{vޭλ T=rޝM@16FoNX,CnW:GaEs+L'icAZ%~-1WRRC p+x 
V\MyhѤRGH5c[YrVƘ1[9" &iRBZ%sSEL.TTTĦYہrȾ+#߂e!1}eS;c+Xe_k^-}-Ĩ "}Uwt{/CXNnqx  voRa6M+`doT,d9(6k= ڨ)-moFsO1w)'A$Jmj !w2hK0Ům蛶mk2mcJ]\k22ֈroʤ}Ml#(c ˴4p7TͩRP];_ߖ)>/o~}{8 y*J'YnJ Eu>uat@27־e$Ѻu!o\E[TeP#U*UZE1>׏mawd{H_yс}֟|w^˞4{G`E۲7{|;SW,fUȫ˚੫Qh8:!/Gnx "VRH-a C)ʙ*D1ov֕efD陡]}f?D[f߲r܄$]:- /Y|srf~SZnY8@˕m_ٸ[*em>86vީSlL//gM4zl y-')Ԛy'iݹ7E\15L0\eTZ DuQDF+8&x& LA;Z^UJj̓:-JmrZnGl>P E&J"JZ(2cj(K͝3U#JpER\Q/=.Ͷ,QXJ+NHJ C)+Ka7J9&bcJyYmBaDYW|(2Q P *Pj5㠠MZ0p9(LIs!))1;k)d)drN2f7o+TL ԃs~ە{/oO]Ny7rp#D&cٳfE9Tg7"Dotއ ?1kDi>\q-yGc><%T(cBy>M)B"&6x3Q5VB޳"r梜tgs/~w40U>Eth/󻏏Z6Χ)%BG=cFV6Jfta6TcW BYhvaIA?uqg)Ymu  =o s3lJΚ3IpdbiJr!{Ӝ#!5e kejuP*N-dCLsK-FJn'YM <5 ]5y(cঊuU?k6WPb_߰}^|7B[7>__0Ĕi53/.KY,Wk{0;oJʤ>8SK/&dM<6C"%>e;f=}Z?>|>ly;yJEd}zv~FZV#v#vllc)uK;؆u뵰jz-Cמ_}ӈ_jVZɺkܛ7+܊h^-*OFZQSZmpN{M%f4(HFn|~([S^ARldMVa@5u6}P9nMD4Z T #o2\ڶm5짉 g}c%r_-|PBܭXtz5vﳻ)~\oWa#uj;7_O.?}s+.klʸz-o7.a[?#"""ȈW)3!>H`׻J5V-V!Љ}G_tl* t˂[y'8w#[B:ޭ]v+[ym)FYNE}*ՑWjr 7Xl~]^]VDRI%V z? pit۾q+BuYwzghndO6(uR.lF0wNwuEd-i~(ţz3ۮkaˇ\Ve.0ͦ- PBdF Hlt6)H &[k*k}vfs->.ͶU-5QWoVOJw#raId&.2}D̒(hx-5!wbϫ:3.( qFaR;ԕzǻqBbuwnE_ަ؝wD nm C4 S>n'buwn}"y@B8DcSE|^ZCȸ*xF@Jվ[#[,omΟR?\@1CҜ p*QŠ nGEyiC0gp1pqXdnǞ[~}Rײ/):偨g8R %e<Hl}M bBxˉP=ca7`FSwdPZ`YMמc( @Wa*l] vr};x.8NJ!AaLka9b <n~-5pA_ .C}]Y=|?AE%*aXN1Xܟ{On:O~ :0]&21[{V3C"hR 2\娊gΖՅ)A+{^>!%]'哅=E3mpy%HQ؈-Ƅ)+6%Y3Ŝ=.se "p2eHNT^H~lQP@H2E=I8MB8u^pZfap H ˅5=(2'3L:̹e("e(FLB-lϔ.H+C[׎pV ^6Ld fa-xvkb0P(ʗ`;ȍ7.>:ȹ)xpiCˋ"?Z@d(#e(GL{EK2K@0-+%.̨̇~9p 32Z%&IuC+ݟfWthIED[Y6vfz@D;$7>upF+9Z@ejk g5!C-T7_YGWFŒx.QW*76/ԛ6_9ʻ^kEe✅( (I- Q͔5W=wM3bW]hwdbzڡ>Y"-TsRi}7֖>iP\Bh1-;&ՖOV$^[q)B[QS[RPbyбΜB~q8!%LnX[d ioEJZT'O5yJ5ʸV f(p3̭. :[Gέn!ITku%YnXI :n2N3xZnuّޭ r`JAĹݪ;n2N3x.\;1Ez&!ʑZ-ykZz>9Tg܌iQ\H/ֿT8gY:zS[[#׿&!)JGt()ʻ݃G]b!J}_#5&wKA tRQǻuEHwJ]ݚ@+ޘzVYR lu _% >O?zmgcΑвF35ȥPL.O5VZ>/]Wt(w?ž .%N=!VuJ9e%ގ qvr>A0]<.ޜ5uvxy? g04|\\1w<%E0_^7B?)Uٌ\dko7-cOd|L8̍Oo)|]pKW>b.›7{hRgo'*>fw %: !~mƃ,_Q7 w4deZ0-mM,}/%m\ŗك6(TM'9jȜ ̀]> bql;NBB>1;au0|}w:[};Kg},iͷ3<]Qݗןb_w%f3"QNW Y[7=>"BgԴc27֍nt>^! 
ND98+ aaO'q"#.# 9 #!r [pbK2YVHD㩾\ZsyrGՕ׍4-t/PdJKz,cD)DJGD)D%K-^O(=F"ġۃ(}Pz(e$ky]3Jrj#x'#JC)Ah/P՗kTk8mw7J9C)gW(,%/'5JAơ_Q CiI5"Pz(ȸ˸T<.]irBqG\וqe܍G]P-8;7JC)9T_R-JQoq(*%Br#G)8Bd/P :%ՈQ Ġ4\7ϞxP<՗T NңFi\I".iA@rq,'"T/P==P-).=nF55)+JZn=Po}?nb䊏Z1r~)9QSnDg[Ï?jp|]8͎pQ+xnocggûm5298Xj$UDwE<៙Y~HcJg61Y): C婸w޽/n60Ǜ/esΘؠZz*%)E.R *ӃX"Ji jJ-z:|nIJoO)j=WΫIPFp^(vXO(@ {؞ɷV8<)dGnWZb'sӵkRyw0Aff''3~Rd =%,Y֙КWlA\x.ކSO5~uvAٔ6-;->?Q@#>-3ߟ]3em/}nr]| @ټ5Sꢹri]mRs !C(-Vbry+g;ᡏ2)᾽ ^mRԶدS/.Ns+~S@/CcN]\8 ny $r RQ?=u?HLG'؜xuCY5<զ5@m+j!'jR=Z lX=ݠhtN`ꒌ27Ko?7h$t$ uAj'V% c*V 5C*0!\gpH:QAyr'=.7讆""X/H.Z.?ʖf6˅b h@%̋L0i\ -\/꜃"95HQbͫ{Չ3y|VCQoj'Z/)"fvM`򕇓o< ==D}Uiqb`[ .=Y%XWsZR5٪ZWSu&աuքڪWd>O*HT~UTš>8T Oe,e#C~.Ҁ. ˈ+FZLiɝBF45Ge3 9"]GRiYW՗T mmm$x/P7jN"/~)PZR-(=)njR9I' )NKEܦ9Ռrut(u2JTPwfImA,X(T8]G4$ҩqVĠ ny#XFԊ*}1iƺ2zQw+ۉ2r̜XA~hP}ulv* lTPi ?SK ;r-bYyBUK]\ظjܗ;g~=SQ/D->x~|hӰrTGq擙@\l,?[*!?5u3MpOE˞7-D|JPGIWE^Rmg:lxrLh,h-p9C (0CSrCȶZ"N܁Qsv?~/-N\?_ L V_>;t_e6MJq"B׃/oT `l-y3䈠&!])DB H|OoƛRD_02op][o$q+$>dx)!r`8H|`eE_g9űag$\D66>Jj>,WrMDؚni/.VͿ=\wf9lz79"ϟm~Sy~ rj{t9 WclLFfх?ٗu0~tˊ]?ݳ 7}\~y=q~yyF̈Vm9Y XH.B<駰4F|#E0"KTT>aR{Um* ʭoC!ʭ>FɌHŌ LA: y^ 2Hˤo>9oBƹs_V)>Vrܲ΁|j7>N-<|Y]}}|?s]}1_Muŕ3N~z(hNtv?.*=_n3/֗8S7yJI3Twy#úYb ӴmAW<9xs| #,3Ȑ'3F{olZzGX>@~$KQ~f$j9&y9K2Jkk0Q bYZlsO2 92uy2bV\OApip !P2<}b*YE^\_֔ qZIOxlxHI|1Ke|[1^6I.Y))q94ȥ\ߕp3? ^@SQC(hRMsqIM݈%n1p 1ʹsoE_`\sesPKt`m-('&սeKQ,TՌD@ʑl!Hw;$kj'ÝRkr+eũmB*PʺA+Jd;jj ${j,gG/J URH d+ PyPTV"gX䐗5@BQQe8˩%SS7ԡMwl4XU©*um*u vK@](e u.H2J5@5 ` cGV>Ȕ2L[,GF~!PMYE(:L (X&yq h iy܀ȷW0aj}0hڷ !pKnK 2 ^I=b11`G 0$f:F@.T#ݹݓB`*sSG]2# aH^GDiNPcB4@M[7 vRZ!\^|ZajFNV0?9Un*s Ua+զ4ቪ#EA2aU`+ TQ-KӓCyGY Kۧ(q3R3_i{0h49Tihʑqs)q* o:7p:SC prkvlRK䭮hQR'gK`5F^MDf1ıu$ ɗazKUx#iaM m~,ƪjs"g.z.ww.?Ӈ}}]ClZ}]~}|7/dN2]>O|#3^Z˫ UM6k">[ߺ&zݕ`4lADto.st!18pI $$1!8H6Q"5;ܮ .'@gů#/-y ne~0RآJ]ZV iUYa 'g Ţ^Lb. dq2ȥ^+.JDg!BؒZ}Vh{:=1!RB-pCv 7GsW\-[l/=߫r`[mR)|[5nnAd7N//;g <.?y\~KmA4 .r˄ͭUME%Y+Tj-2LTy΅ejg? 
\_VzWsyk;AWEHl59\{l)*{YsގR!g'ݶW׌XJeQJ$ ln.*a5m H,vZ1Hu%jK lLb`fV Esp B W$TmDΊ ?f) θ-*PՑ`|ZIb2( dTHް< -,ug!8 *Is*r B uhQtO猗E%Pn‡Y؁&i3 F|E.fn\aqMХU&wUZR ;sKQF(!- [1$wyuengN 7h0ih.*+Gjˈ18p(VvfJfδ0p puc 0͔q/v^ ׹p:Vᣫşz1kARaLtbr~PDpQoNÎO+Sltķ3 /Qْزf@Z5NGpNᰅjC4±2Bt{`:ӓ[qCFZT0H;ǠӍu:9n*16V!@;i4r MfDsov"Zg'=WW!p4a Ud˧HyW*YԴ`|!sv !mzȚkDf# r t:JJgVl96%]zHI7B9DS0eZ:ݐ3zT@'3bی/p3Yw!)mM&8wj uψn"Lsn2|=Ի wT TF+xtaCްNxe\& 7Mg:0:X%hOA5@}|,clZ\g x .Ɲ 8%S;.=~N & 㣇 өO0>fZkdp!! "OD*aUhw!L00u_ػI`Ú!bzu7EDES<ҏ f>wc]~uSBz%bO. g ~=BӫH ~8޿߶714}RbR vFIT0JET06R+tF T8?$Pz\Ԓ@Qz(U(U9p1H_~AM^w%r"8e$VT6'Cc)f7ɑ&iULhCc3K}7QsΓa!q5stTŤC[&,|]aɉłesD^K#K]}#]*B^=zH'H(,ڟqR~ +2с8mC)RlN)g[E:kzԼeDn_V]#5qҜW@kP})m8 0F;\X!;D;LWK.! agH,ysЪ9`J3% ,IV Ah"] )[))2}dBjSAΛ1sCQ%/+pL ʲyMJsΠ\IJRXkHCbU@#s1RɪLoeG[Y^kmZKq>je_}_47?X2aD+VR 6Z&'@[ L[:eA$2Fu55&ɁgiuU]Tb'.*2dL+桕 PF{cIJ˲̵TƫJѬ@KHJ4ihu_K1`Io)}[ʬc2+^EiRF2p˥yFa2OftPd"@h)F^ b(;;@QJOߖ'jZ){ &9%EPk rЙ$QÓ{GLy[Ѫ W_~jMI3ŕdL5Vo~^gEG*MݠC߹0!hTrpT+rT#OqTQ*&'ζN¡v 9ݍaΘR(3ẗsQG Csry al5͐^ _RmZhSX_AOԀ/ZZ_o܀oV/ ,*4jswa1-M&]:S.#FF $;*EdKD$*BQ*MJiPǤJ3JOҙ)+x_6JA]d0'JAZk(=EȦơTET06RAF)RJ}E]59QX9%R_wVUQ1 "PzLQ{R*6R+4>RAa(ܙr(Fj/lv\&ر} dz#Vxzb=b|[I(%3/p{$hϽ %k}*[*B#hCT{ԔM ?M7~b>~qlmcQNp;!;|Q:7@N~%\Pxbs@PJ K ,ۄsA!.O#C*OJA>D{Ŕ鏪Qk:f' 7DǠlTM|tW+ %Q,M9qTa:gOD;u%)3Nj' 9CDLG#-fAc_K FFǦbbĪoٸEw2M8_0Aɞu/D& t97EzBΔ8ERiwۻ/)ֆ"5TZ1#PYKlE x&J ΅Bi\%8/nĩTy!DJ%-a`?ԊȜBb BZ]Au P3绤Tr]\e$~{$P4x?T:V[Ԛ6S*4ӾU--%J]y "[s!X{=+bo=i|n@_ܣ1K5SI$ ?ad((GZz *q#rh 0]W.g*׻5ɔ}Jh֤k9!?4y2EɄ`|p1B:TirK$A&yrf -\h8>½>4ԤDl"hzS14izx)46GXd6AXQHvOD{C VҫI# yަJd͊M紱O4?~r@'DWKNъܬq0!+r;;?B*p2GyKKVFI$DjpBtgd!Ӄ݌N$ Gʌ b/2cyE;ʑu]OW툫?ap_?=Ċ-ԍnoq Uom(*籐k.FW{򗯯?(q@ۻGG3DL:?ܳe^,[ byP4 r}L4/vp¤ 6nހrQ-)I˔&@8av+AJ~GNv;a0mqa%f."䍇hILI`/7bt[RB;r PSv+?Sݺ7򞐄Z<7s=8&7տշn"d|rpC8&Ls O׏k#n@F뀇cm{r131 >-4N n|P?%(C'dAT>$M} rN=|`)nn=8gYm\}gxg!ײƾIX}nc4vq?L>yQ. gy澛-O Vvws{5ذk NU:cO4 # &r|(DUPNv| ^g6|VރSR5EzGrH.!|8 ~؀ Ȯb}Nm's\_9r `/ Lĉ2u `]aQrt w|](]]^nzF]%"Lf3&M2yCVrk#%*[{N9pԜYjj++!J-ˤӔW6(bF tHV$:_?J\nQY5Wȸ!Np(4ҴAډ)jD!l+%6R(V rMܚ><AGz1|6gO%. 
Fxqj.Gݛˇ^_п}ߺ/ܯ=qVQn'=o<3Lπqtma[}- 'ñ?%$,ftZ 5%SCVW|PD/zub$z@v:_f9Լ"ͻ/ǭn4{i99KGW28N̖C);YFk4cP!@S^TCflRa!(iل!w[￾ ۮAY&x9bs lq#XO%wlsf>⠒ :LKz'Jx]=_P&$%wboռZbѭ:FgmH;?o F~o{8|x-<7[]FhjBԦ'Wd-0!(e~ؿ|y* nhֱ`X06 J#*\͹50rik+j<:Ąq;lhOOt#vtfG-U| [ufˈmO%Heӏ6oc|8 Ͷ=9~Z b?loub6%sLF-n0S''R:SpZoFc>jP(Rݯ闇}z& *Rg!Nق0hDFɱIvIo/~bRoXlRJ!|e@\pA>Rzy Ti|Ta5c+όAe5΀UJ )},=~w"C[ȅхG)s;ZJ"X)BJm(4Xu7|*ⅫET""24J0R\V&TR:BP*]=DQㄫ"ba>dr&Oi%Zr)0%)I#۟GTr Zq IO8 F煚]E nP& qXPD!yޏ|CwSp~B1hiJ>ӔLSg2@@`o,*XOZ|TC|\̿^mYmmsC(U[l'GmGEr,4"̬Z5fR=L C¹%O֗ocvyct7A9>r" 縝G_XMV7Fuc嫻FDKp~&~Ov3:(称yN :udRu݋*}E { ®7c3]ȎD/F.o1)8ۆG["S66Q_ۻZ ^  /gy[9"F9G:jI40ER=~G")4HZ^r&*v!o@CʭGB$k4!JzAѢRx^zq{% %$+=&$SJ _)TX+n9az uLP)CqE?4nПCp{b6\^.(LS1ޑdg{cl/r4* nNOymш).`ӭ#Bo$@'^-N@Em(eu$zTV#UՌY ϔԐ08oVT)AԦ j*FY6e;R u-T1,4+k ZT ERc'ؙN^jnklHRy |r9Y%Ⱦ'&EAv:Y7/Ҁd]! GE)\ϩYX.JqpK&0 D\?GBw~,$<Ǜw-qQݻv]Ճ3Ͽ{VaW=]4`eϞ=cD)q/߽;Au9 N5+]\}}]n[%m[R,mX-@yrG'`us蒰1a3LL5A(e"-JV&1%J1J9IRNSn[^ryGH\x s)8f}V_u DGHTKua J[}ݶ3/QzQJuZRJbQ|#(=(\ 1+Q(sj/RK"QԥްZ z3RLKqQQ z ҥՌ;JR\եDQDiZ]d&NyG)д(菏$JY}ݶ8(M;Q*"J9K҅ՒK]zQ*0-J.:Q"JY}ݶ9J{R4>7Lk"vh+|Gp+YkE{>B_ثn~ef>Zh>٩ i@!lDѼ%/E4=d!Q`oMd@-ȼxmP|YR|FݞL@v^foˣ8 &ZKrHbDnoz ,:GP@BK!4ZLprP[n;TS% A*p@D9t=u $D+v\QKKFHǢrIX ׺'_S( 0iX|{]B@.dC^gTRH4\[h)S) `Dz{grA"c9mqb}\xNϷ,=oz=5LtMҸ(܄K Llꋮ)otc#u|CxE$u?& -ř ?L8;sH#:jCo;uj@YQGVr-P*$!&z#qȨvbHq!bWQ@s٫Ó R8!WRe|)a֡q^j}m"jha.'XzUY܄A5֠Cv*!a`Qc)D/nS][o[Ir+_ͦweF vfv_ *Ѧ( IY&)EQ!%yOWuohv\A-V-L))ˀ$Rx >IdJmɢV| 8P/`fOP ;6ƍGN[w+49ryƭ^ܹ3-4}\iN$GA"K$G yjCzowj0u̧۪s?ōo" U8d8@4ڻOss╲h|7~^T#}q2.X揜~;tǏf^7V)uo9QaԻOgZڧ#E=[Ir(D48w%i 5Q`<ѷ9hiz[ wJ( -U浥GN" gB dCl`9+T% /y,iz"eJ^z@o@7̵ĐCܷze>iIh ҧhC"(Iq8F (%T0X7c9  z_st\o p{_P3\k嚷O =0/!/ƵU Ws*y];h v,+h*OV_`Kkwؖ1 ޅ4Y,: K®^3vgy¢>r|>jېaToayr\( }W8Du,V^\^1k&lNi>uH?Nf0 yew ~qgPlUaiޫH^ċ{[|jry9u-Mm􇝦F&g%Mگ:$%Oޱg:g`Qpg Gqjcׯ6# I2$W}粒b<]OF^_fgHviٲOΗk5ןZhէ6aIէ}!\]}-Bl͋hZ]Ӛ\xMk* hRB]d˰AGqF~g@W5&Yr @i\Ybm03Γ5FnMReUV񾲲~&[[~~}^? 
~h>dhp1*+=5Fcw6!zm&LM .3A )0Nal=TE)"p%V-AO2/rw.{m'E[ypAZ gJEZ V >]iITA~ #hUJc.V`')W)XP(vbh5F>u} prN>w]:jCK{eNhFqu_:kM$m[#w/ۙr2;ÀQ];f=̕OHv\?~}c[B~K{5FdcuhoO؉9x@I5hRbp鰁s4^,ѱp*ߺѥ_[HQ*bZpT~0p^޳@zy~i{ [)a^{Om `#ۮo^T76lXl GޝjWO{]T';};SC oخh)n}u{}ջ\],컿"g?9 'UjlNއy ݓv!&ی-w)[k)~[k7Q}Jqki~ld"tէTMK_{ꘂjKEXTRmy*7-}Z*E;-\c9TvZ:7[]X^FglqIvS!Eϓiř>~<}y24t O2f2h|&ʅ.IJl7Ń,_.`:HYWL2Uf\/ Ԓ !00Јxia;x0\]AhOR{0/u+lo$66/j%8{ca㻷C3CFa )` **RlHY5OpB{<2=I߻B≩X0&n8q=4ܱ_4*LsZY61HH:c y<繱FVl ub_>Cb4 i3%stl y-y}JكYfgJpGV`lJXg"Qh+[@]iwž+ 2M>1m&OrP0Hu 2o DgI`&8B2A5c` "!'"]qoսvsODGO>2>݌̍(P4y/J` ')ˎ.O|]Ȏ[4 nL<$} ,SPh+tdSl](I@pEvȣhO x&e&2OG ݋cntjɹy1R,ٌ5'!fct5JLgWPR$ʨjYlT1fJr {OEf(0jli6ų;C4GBy0=9D+$4=u3S0ADJF8| $޳ e!F_N[}-|aU$X Zd<BeƘQ2 j(DmF:^}2z&Y9MI9 ) q% G|ӲB>FٻyEo.EPfu,:&HA9fYErzDyz^jQ"p3`~C.~eqt=5k oGIv2Yu BR(QbfS=K'y 9Rs5Ɓ4;sd)-/3)]˻=f75<,r, r sTF@ !R]nH-z>ˣHo_s$|y:8 .e9hP|*!c,;v%&2s/ta2R)Aa$(Gw묱kSҋxoȒ1t3(8!klǽQ%~MTRm3oW]0-+F9͎ 믆d;Lsl98ܒVA*%J>ץiR̦xZ#?yzx h2iw0EM|60Wjec5&gd YDHdreqaW/y5Upn5{jٴ~on|{3էT[xܯrMK-5}N&i=6d4_\4:O7i:td0/[,K:׈O JaG`LBbhvu@;OH)Fl]$@iCA_kQ9ydy7<,79_^~bE`Rŏӯ0HdSl$;` ci-¶cZ>Yxa;vz_,9e #Q[\MHFSBT`fQ(£C'ۗ}pVt:e܏SM&QHF)*30yBQțiw} ZZ#)3fdYW PKY&1ι01?{ȍ`O˼,'93dd ~-n˒V.ab-6bȺUI+V L( 6>0oH (XuxΒ΂TDŽ~e/2&LF.ӕdN{߰Q&]pY_# ie &tۯ$A5NcÅQ"f6Oϑ5 "3kX䘌9mՂsک*/H^,( Ҕ& ȼ3hm$Q:^9_xƾ*ժ຃Ӳ\OlԸVc@RZPw@>.M;uJ%#E ֲ9LBPSx\|G_#PZ'uA!ؤ16,8FhM`á,ҷ+&]Z#O%Z 7{ûq7Igu-hdKF3b19SLCîXJ፝sLwpiw\)-^WJPrK;X'IX?|/,AMdN_7",SjHmƚLͮa'Of 6si6 IkV<||c}D Z>~OS@-my[TnI~l#h PexAiCJqh6xCnD-h^^9Y[;߿-h;^Ȃ4l&hj&T=T_S=>2ctHx cJJe(g׳AZ0}*ǥ&ƾBuA<0Rlo+o͛Ӟx`央51voz*{!Lؿ|PykmťTy_PFz%RYX+cV/V殲AD5@ꋳsfĥfZXFz(R`!:n-Ü#dچ0qvtז?_QLW`ș\Fd825h%/\#yK+O-*Tf?He9I)HSw_z|}ps{O(WL%DUܙj'fvV2(ETAMr4BV lYYF9?O.B|Ϻ@ Lb!c)dwF?0,1|ePJ ,EQ s%JiV)^TA j̬VHEe#@ŜQ#+C)d#iC))ߋ^Wܳ0f&2X#eJ3᫕Z;оֻeN/d5*uQrW4h݆X"!LJ*tᖔ9 rX (Q0ԩWS7z8B8K+SYfMO^`]`[5._~t3{ #oQ|<5090~=9K%x.Ɍa ƋQo|)d:S2}'Eٻo-CA+n27 mh;{8^Re.A]J-8iP[NBd<44\**v0k ~ޥ?J4 < tȞ""3B &?A s5ypu~v4`u HV䎊O\€ӡ 2}`=>:D~%?u..cE_*έ^1QtX,3%H +*DRlY`eI+OHHY0L5J PAKtK^ӭ05cC-l+z񹭋;.[tBPzuG_'tF# ܺh(8Fug+g\gig l~@dz]-HS7 
map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:22:04.860712038 +0000 UTC m=+0.655592022,LastTimestamp:2026-01-28 11:22:04.860712038 +0000 UTC m=+0.655592022,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.881135 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882210 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882312 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882381 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882444 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882505 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882565 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.882645 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.883695 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.883804 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.883918 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.884551 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885291 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885324 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885404 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885431 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885446 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885464 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885480 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885494 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885511 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885529 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885548 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885571 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885590 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885609 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885631 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885651 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885697 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885716 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885732 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885750 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885764 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885780 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885794 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885810 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885826 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885841 4804 reconstruct.go:130] "Volume is marked as
uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885860 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885896 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885913 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885930 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885947 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885976 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.885995 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886015 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886055 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886074 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886091 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886108 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886128 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886150 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886176 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886196 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886214 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886234 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886254 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886272 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886294 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886310 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886328 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886345 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" 
volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886365 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886383 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886400 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886417 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886432 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886450 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" 
seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886466 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886483 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886499 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886520 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886536 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886552 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886568 4804 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886584 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886600 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886615 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886631 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886650 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886666 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886681 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886696 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886712 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886728 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886743 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886760 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" 
volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886775 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886792 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886807 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886825 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886840 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886857 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" 
seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886873 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886909 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886929 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886948 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886948 4804 manager.go:324] Recovery completed Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.886965 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887080 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887127 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887140 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887151 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887161 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887172 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887192 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887205 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887220 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887235 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887247 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887260 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887274 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887285 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887297 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887308 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887318 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887330 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887341 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887352 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887362 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887371 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887384 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887396 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887411 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" 
volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887423 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887455 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887466 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887479 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887490 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887501 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887514 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887526 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887537 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887548 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887559 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887570 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887583 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887594 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887604 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887614 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887624 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887636 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" 
seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887647 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887659 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887669 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887681 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887690 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887699 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887709 4804 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887722 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887731 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887741 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887750 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887759 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887769 4804 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887780 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887792 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887806 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887816 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887828 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887839 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" 
volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887852 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887864 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887874 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887905 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887920 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.887937 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892893 4804 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892947 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892967 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892980 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.892993 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893005 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893020 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893055 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893067 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893077 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893088 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893100 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" 
volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893111 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893121 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893131 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893142 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893152 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893165 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893176 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893185 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893195 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893207 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893217 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893226 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" 
Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893236 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893246 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893255 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893265 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893275 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893287 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 
11:22:04.893299 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893313 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893326 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893372 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893386 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893398 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893409 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893420 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893432 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893443 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893454 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893465 4804 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893476 4804 reconstruct.go:97] "Volume reconstruction finished" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.893484 4804 reconciler.go:26] 
"Reconciler: start to sync state" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.898838 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.900847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.900906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.900922 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.901625 4804 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.901641 4804 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.901657 4804 state_mem.go:36] "Initialized new in-memory state store" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.912000 4804 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.913698 4804 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.913733 4804 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.913753 4804 kubelet.go:2335] "Starting kubelet main sync loop" Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.913792 4804 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.915319 4804 policy_none.go:49] "None policy: Start" Jan 28 11:22:04 crc kubenswrapper[4804]: W0128 11:22:04.917694 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.917783 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.918418 4804 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.918445 4804 state_mem.go:35] "Initializing new in-memory state store" Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.965710 4804 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971088 4804 manager.go:334] "Starting Device Plugin manager" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971195 4804 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971213 4804 server.go:79] "Starting device plugin registration server" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971686 4804 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971709 4804 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.971922 4804 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.972019 4804 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 28 11:22:04 crc kubenswrapper[4804]: I0128 11:22:04.972037 4804 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 28 11:22:04 crc kubenswrapper[4804]: E0128 11:22:04.978650 4804 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.013922 4804 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.014071 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015377 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015543 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.015952 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016036 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016913 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.016923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017074 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017273 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017316 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017580 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017619 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.017632 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018532 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018558 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018570 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018809 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.018941 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: 
I0128 11:22:05.019103 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019134 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019747 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019788 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019868 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019930 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.019950 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020442 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020449 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020714 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020748 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020860 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020891 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.020899 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021020 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021044 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.021847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.069559 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.071816 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073067 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073148 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.073189 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.073987 4804 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096115 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096160 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096181 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096200 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096219 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096234 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096249 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096269 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096382 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096430 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096469 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096581 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096717 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096739 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.096774 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 
28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198644 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198713 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198738 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198761 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198786 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198806 4804 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198829 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198910 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198940 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198965 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198969 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199010 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198990 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199105 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199125 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199106 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199189 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198928 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199141 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199042 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199214 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199161 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199127 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199234 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199145 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199233 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.198976 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.199052 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.274979 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276138 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.276175 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.276564 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" 
node="crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.360189 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.368506 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.374275 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.399945 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b WatchSource:0}: Error finding container 69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b: Status 404 returned error can't find the container with id 69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.401089 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4 WatchSource:0}: Error finding container 2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4: Status 404 returned error can't find the container with id 2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4 Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.403877 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941 WatchSource:0}: Error finding container 
24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941: Status 404 returned error can't find the container with id 24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941 Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.405403 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.411266 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.417920 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1 WatchSource:0}: Error finding container 6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1: Status 404 returned error can't find the container with id 6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1 Jan 28 11:22:05 crc kubenswrapper[4804]: W0128 11:22:05.432298 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356 WatchSource:0}: Error finding container 20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356: Status 404 returned error can't find the container with id 20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356 Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.470175 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms" Jan 28 
11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.677257 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679146 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.679250 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 11:22:05 crc kubenswrapper[4804]: E0128 11:22:05.679883 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc"
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.862162 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.865230 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 06:44:32.319393629 +0000 UTC
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.920053 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"6bf5c899aabdc28fad671b9efcce89c4423086faf052023550b45fe941eaafe1"}
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.921911 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2f32acf3744266b6cdad33a58d4549231a2a17c2f8ceea528c00a746c9d348f4"}
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.922872 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"24f9c06936dca3e5060ab5bf7ab2a4c48db5c6590d8da3d1c5f7a7c5edec7941"}
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.923725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69a8bf24342144c68594b960d32a0a5763f3059974fdb54fe19cc2be48c9696b"}
Jan 28 11:22:05 crc kubenswrapper[4804]: I0128 11:22:05.924551 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"20d319a0c54a801cc25791813db99213f1dc87ee71f36201e9e4f6d1a99c5356"}
Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.187360 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.187818 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.223223 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.223303 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.271877 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s"
Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.319536 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.319597 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:06 crc kubenswrapper[4804]: W0128 11:22:06.469765 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.469878 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.480566 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482387 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.482463 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.482945 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.861898 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.866083 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-06 17:27:04.884503652 +0000 UTC
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.911270 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 11:22:06 crc kubenswrapper[4804]: E0128 11:22:06.912506 4804 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931038 4804 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb" exitCode=0
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931138 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931150 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.931999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.932050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.932064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933473 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933516 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933524 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.933637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934428 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934473 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934918 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" exitCode=0
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.934946 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935082 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.935826 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.936121 4804 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef" exitCode=0
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.936173 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.936234 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937359 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937398 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937712 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="7540e973732facca4f893f6091ee46a0a9aca077a48c75dfec8d5a4f8816cfb0" exitCode=0
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937741 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"7540e973732facca4f893f6091ee46a0a9aca077a48c75dfec8d5a4f8816cfb0"}
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.937775 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938161 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938519 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938541 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:06 crc kubenswrapper[4804]: I0128 11:22:06.938550 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.862002 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.866335 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 23:10:16.011717552 +0000 UTC
Jan 28 11:22:07 crc kubenswrapper[4804]: E0128 11:22:07.873120 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="3.2s"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.944579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"af8f4d9cbcd2486a41c7ef311707360b23e4c873cccb7bc35b75f90bf9831039"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.944649 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947249 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947261 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.947364 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948907 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948925 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.948934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951674 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951702 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951713 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.951724 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.953453 4804 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c" exitCode=0
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.953553 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.953985 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954250 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c"}
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954581 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.954611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.955127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.955144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:07 crc kubenswrapper[4804]: I0128 11:22:07.955152 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:07 crc kubenswrapper[4804]: W0128 11:22:07.957219 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:07 crc kubenswrapper[4804]: E0128 11:22:07.957334 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.083527 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084591 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084620 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.084675 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 11:22:08 crc kubenswrapper[4804]: E0128 11:22:08.085167 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.27:6443: connect: connection refused" node="crc"
Jan 28 11:22:08 crc kubenswrapper[4804]: W0128 11:22:08.088932 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:08 crc kubenswrapper[4804]: E0128 11:22:08.091035 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:08 crc kubenswrapper[4804]: W0128 11:22:08.560475 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.27:6443: connect: connection refused
Jan 28 11:22:08 crc kubenswrapper[4804]: E0128 11:22:08.560627 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.27:6443: connect: connection refused" logger="UnhandledError"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.866785 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 13:21:50.922248677 +0000 UTC
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.961574 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b"}
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.961711 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.963024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.963090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.963105 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.965934 4804 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c" exitCode=0
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966013 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c"}
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966059 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966100 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966136 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.966201 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968234 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968239 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968371 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:08 crc kubenswrapper[4804]: I0128 11:22:08.968209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.867018 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-21 12:47:23.918256267 +0000 UTC
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.945987 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973626 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad"}
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973669 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973697 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418"}
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973714 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0"}
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973727 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8"}
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973730 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973740 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01"}
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.973717 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974753 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974795 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:09 crc kubenswrapper[4804]: I0128 11:22:09.974876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.223665 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.223851 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.225183 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.225265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.225295 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.867724 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 23:56:48.62020736 +0000 UTC
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.976732 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.976784 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.976806 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978531 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978667 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.978961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.979022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:10 crc kubenswrapper[4804]: I0128 11:22:10.979050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.247829 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.286254 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288733 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288821 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.288861 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.322229 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.322557 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.324292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.324384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.324410 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.328580 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.867942 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 22:40:13.553580864 +0000 UTC
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.880318 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.979218 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.980359 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.980395 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:11 crc kubenswrapper[4804]: I0128 11:22:11.980406 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.688623 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.868693 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 05:01:18.626430416 +0000 UTC
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.982846 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.982955 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.984337 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.984414 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:12 crc kubenswrapper[4804]: I0128 11:22:12.984435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.076572 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.077209 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.077461 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.079195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.079259 4804 kubelet_node_status.go:724]
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.079283 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.869188 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 21:33:29.993086529 +0000 UTC Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.925509 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.925825 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.927963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.928023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.928040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.985555 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.985611 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.986647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.986686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 
28 11:22:13 crc kubenswrapper[4804]: I0128 11:22:13.986699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.009397 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.009676 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.011302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.011347 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.011365 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.137177 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.311795 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.312003 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.313967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.314021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.314039 4804 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.869978 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 13:56:49.097254149 +0000 UTC Jan 28 11:22:14 crc kubenswrapper[4804]: E0128 11:22:14.978737 4804 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.987550 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.988749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.988781 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:14 crc kubenswrapper[4804]: I0128 11:22:14.988790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:15 crc kubenswrapper[4804]: I0128 11:22:15.689203 4804 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 11:22:15 crc kubenswrapper[4804]: I0128 11:22:15.689304 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting 
headers)" Jan 28 11:22:15 crc kubenswrapper[4804]: I0128 11:22:15.870996 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 06:43:05.735931818 +0000 UTC Jan 28 11:22:16 crc kubenswrapper[4804]: I0128 11:22:16.871647 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 23:29:10.441157274 +0000 UTC Jan 28 11:22:17 crc kubenswrapper[4804]: I0128 11:22:17.872438 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 21:11:59.098011467 +0000 UTC Jan 28 11:22:18 crc kubenswrapper[4804]: W0128 11:22:18.774043 4804 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.774459 4804 trace.go:236] Trace[939492508]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 11:22:08.772) (total time: 10002ms): Jan 28 11:22:18 crc kubenswrapper[4804]: Trace[939492508]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:22:18.774) Jan 28 11:22:18 crc kubenswrapper[4804]: Trace[939492508]: [10.002320027s] [10.002320027s] END Jan 28 11:22:18 crc kubenswrapper[4804]: E0128 11:22:18.774481 4804 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 
28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.862858 4804 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.873387 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:30:53.75753583 +0000 UTC Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.962344 4804 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.962428 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.969856 4804 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.969947 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 28 11:22:18 crc kubenswrapper[4804]: I0128 11:22:18.998469 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.000367 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b" exitCode=255 Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.000435 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b"} Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.000812 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.003629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.003671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.003686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.004203 4804 scope.go:117] "RemoveContainer" containerID="dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b" Jan 28 11:22:19 crc kubenswrapper[4804]: I0128 11:22:19.873795 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 
04:02:02.545051435 +0000 UTC Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.006459 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.009096 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e"} Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.009418 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.010997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.011040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.011059 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:20 crc kubenswrapper[4804]: I0128 11:22:20.874947 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 12:47:10.722942249 +0000 UTC Jan 28 11:22:21 crc kubenswrapper[4804]: I0128 11:22:21.876039 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 17:19:29.104614141 +0000 UTC Jan 28 11:22:22 crc kubenswrapper[4804]: I0128 11:22:22.876497 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 16:08:35.909577924 +0000 
UTC Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.083485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.083636 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.083764 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.084853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.084916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.084932 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.089114 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.113709 4804 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.859689 4804 apiserver.go:52] "Watching apiserver" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866261 4804 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866470 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb"] Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866725 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.866913 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867000 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867074 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.867073 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867149 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.867264 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.867286 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.867304 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.869061 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.870106 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.870210 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.870267 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.872392 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.872482 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.872859 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.874532 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.876579 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 12:03:58.773462429 +0000 UTC Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.883012 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.926512 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.938154 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.949086 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.949215 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.960210 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.960909 4804 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.961128 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" 
interval="6.4s" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.961357 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.962604 4804 trace.go:236] Trace[1821295227]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 11:22:13.512) (total time: 10450ms): Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[1821295227]: ---"Objects listed" error: 10450ms (11:22:23.962) Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[1821295227]: [10.450077397s] [10.450077397s] END Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.962630 4804 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.964483 4804 trace.go:236] Trace[211428802]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (28-Jan-2026 11:22:13.329) (total time: 10635ms): Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[211428802]: ---"Objects listed" error: 10634ms (11:22:23.964) Jan 28 11:22:23 crc kubenswrapper[4804]: Trace[211428802]: [10.635098218s] [10.635098218s] END Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.964523 4804 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:23 crc kubenswrapper[4804]: E0128 11:22:23.964782 4804 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.965242 4804 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.965417 4804 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.967572 4804 
reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.967998 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.972217 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.981007 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.982645 4804 csr.go:261] certificate signing request csr-rjlfw is approved, waiting to be issued Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.990564 4804 csr.go:257] certificate signing request csr-rjlfw is issued Jan 28 11:22:23 crc kubenswrapper[4804]: I0128 11:22:23.990718 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.001451 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.009949 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.012416 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.016706 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.023578 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.025299 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.027253 4804 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-crc\" already exists" pod="openshift-etcd/etcd-crc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.033579 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.052380 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.064463 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065728 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065781 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065816 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065843 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065861 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065909 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065929 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.065948 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 
11:22:24.066018 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066177 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066198 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066482 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066531 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066579 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066666 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066679 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066833 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066908 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066948 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066975 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066977 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.066985 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067002 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067002 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067097 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067101 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067141 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067179 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067144 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067210 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067238 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067264 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067330 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067383 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067416 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067440 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067464 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067469 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067488 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067522 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067481 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.067551 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.567531116 +0000 UTC m=+20.362411110 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067815 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067818 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067742 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067840 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067782 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067743 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067861 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067876 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067915 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067938 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067958 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067961 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068178 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068204 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068225 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068248 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068274 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: 
\"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068329 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.067975 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068347 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068367 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068386 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" 
(UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068406 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068443 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068463 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068488 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068511 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" 
(UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068535 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068558 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068581 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068626 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068648 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068745 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068767 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068789 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068809 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068830 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068849 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068906 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068941 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068963 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068984 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069004 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069025 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069048 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069073 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069097 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069119 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069139 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069164 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069235 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069257 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069280 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069324 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069351 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069374 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069414 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069436 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069459 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069490 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069514 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069538 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069563 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069586 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069627 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069654 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069680 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069709 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069732 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069782 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069807 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069830 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069856 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069878 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069919 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069947 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069968 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069985 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070014 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070039 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070062 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070084 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070106 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070127 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070148 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070178 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070202 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070223 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") "
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068294 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068318 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068364 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068546 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068849 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068873 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.068999 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069001 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069152 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069213 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069305 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069429 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069477 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069555 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069573 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069912 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.069998 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070549 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070661 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070851 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.070982 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071053 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071322 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071487 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071512 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071614 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071617 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.071668 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072067 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072227 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072882 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.072762 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073273 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073786 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073889 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.073823 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074021 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074084 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074106 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074142 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.074147 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074160 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074181 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074196 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074200 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074257 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074276 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074329 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074344 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074352 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074398 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074423 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074444 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074497 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074517 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074532 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074537 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074568 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074589 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074607 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074624 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074644 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074705 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.074755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074771 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074787 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074803 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074835 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074850 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074873 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074907 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074924 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074946 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 11:22:24 
crc kubenswrapper[4804]: I0128 11:22:24.074965 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074981 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075001 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075018 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075044 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075060 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075078 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075094 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075114 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075911 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075938 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 
28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075958 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075978 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075995 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076013 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076030 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076045 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076061 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076076 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076094 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076110 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076129 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.076148 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076200 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076216 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076234 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: 
\"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076251 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076268 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076284 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076304 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076321 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: 
I0128 11:22:24.076337 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076353 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076370 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076387 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076406 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076423 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076450 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076475 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076496 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076519 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076545 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: 
\"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076568 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076826 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076867 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076914 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: 
\"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076987 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077090 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077133 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077155 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077175 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077194 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077234 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077313 4804 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077325 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077335 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077348 4804 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077359 4804 reconciler_common.go:293] "Volume detached for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077368 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077378 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077386 4804 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077396 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077407 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077440 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077454 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" 
DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077465 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077741 4804 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077758 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077769 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077778 4804 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077787 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077796 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.077806 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077819 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077828 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077837 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077847 4804 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077858 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077998 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078011 4804 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078020 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078029 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078037 4804 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078046 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078055 4804 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078064 4804 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078073 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078558 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078573 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078587 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078600 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078610 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078619 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078630 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 
11:22:24.078642 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078653 4804 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078665 4804 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078676 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078691 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078702 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078713 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078723 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078732 4804 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078741 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078750 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078763 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078772 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078782 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078791 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 
crc kubenswrapper[4804]: I0128 11:22:24.078801 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078810 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078819 4804 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078828 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078837 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078849 4804 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078895 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078906 4804 reconciler_common.go:293] "Volume detached for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078915 4804 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.080417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.084913 4804 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.074563 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.075694 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076077 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076145 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076178 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076385 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076383 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.076529 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077099 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077188 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077305 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077383 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077402 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077567 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077603 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077997 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078042 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078467 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078502 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078773 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078944 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.078985 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.079120 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.077639 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.079314 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.079921 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.081744 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.081805 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.081991 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082321 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082364 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082499 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082681 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.082730 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.083839 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.087552 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.587535495 +0000 UTC m=+20.382415479 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087863 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087871 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.085823 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.086199 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087176 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.087187 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.088162 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.088213 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089098 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089518 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089519 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.089557 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090100 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090147 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090441 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090535 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090867 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090923 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.090926 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091042 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091176 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091258 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091283 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091349 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091598 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091638 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091899 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.091944 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.092170 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.092186 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093066 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093317 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093580 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.093799 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094163 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094554 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094650 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.094690 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095225 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused"
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095406 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095483 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095579 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.083953 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.095739 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.595722083 +0000 UTC m=+20.390602057 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.095742 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096347 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096370 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096468 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096514 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096831 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.096874 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.097217 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098004 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.097397 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert".
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098205 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098274 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098335 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098440 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.598423618 +0000 UTC m=+20.393303602 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098595 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098778 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098804 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098816 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098824 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: 
"09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.098834 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.098855 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:24.598842921 +0000 UTC m=+20.393722965 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.099051 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.099345 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.099630 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.100762 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.101201 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.101920 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.102007 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.102651 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103068 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103737 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103828 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.103996 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.104039 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.104155 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.105382 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.107162 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.107488 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.108403 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.108438 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.108912 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109382 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109500 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.109495 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.110540 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.110702 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111077 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111094 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111562 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.111645 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.112419 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113059 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113334 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113642 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.113781 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.114377 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.114415 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.114643 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.115453 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.116275 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120114 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120314 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120582 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.120717 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.128017 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.129404 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.133302 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.139947 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.142616 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.145750 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.152658 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.162630 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.167738 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.175316 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-r6hvc"] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.175629 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.176954 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.178563 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.179255 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181259 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181289 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181380 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181396 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181408 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181421 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181432 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 
11:22:24.181442 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181450 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181458 4804 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181465 4804 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181475 4804 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181485 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181496 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181507 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" 
(UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181517 4804 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181528 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181539 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181540 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181549 4804 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181571 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181582 4804 reconciler_common.go:293] "Volume 
detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181593 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181604 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181616 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181627 4804 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181639 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181650 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181662 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181674 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181685 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181696 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181742 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181754 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181764 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181775 4804 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 
11:22:24.181786 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181796 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181806 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181817 4804 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181828 4804 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181841 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181851 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181861 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" 
(UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181872 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181903 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181914 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181925 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181936 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181946 4804 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181968 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: 
\"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181979 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.181998 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182011 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182022 4804 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182034 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182045 4804 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182056 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: 
\"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182067 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182077 4804 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182087 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182099 4804 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182111 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182123 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182135 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: 
\"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182146 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182158 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182169 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182180 4804 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182193 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182207 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182220 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on 
node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182231 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182242 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182254 4804 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182267 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182277 4804 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182288 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182299 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.182311 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182321 4804 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182332 4804 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182343 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182354 4804 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182366 4804 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182377 4804 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182388 4804 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182398 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182410 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182421 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182433 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182445 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182457 4804 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182469 4804 reconciler_common.go:293] "Volume detached for volume 
\"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182480 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182493 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182505 4804 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182516 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182617 4804 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182632 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182643 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182653 4804 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182664 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182675 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182686 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182696 4804 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182707 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182717 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182729 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182739 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182750 4804 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182767 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182777 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182788 4804 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182799 4804 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc 
kubenswrapper[4804]: I0128 11:22:24.182809 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182820 4804 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182831 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182842 4804 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182853 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182865 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182880 4804 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182915 4804 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182926 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182937 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182948 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182960 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182970 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182981 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.182992 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" 
(UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.183002 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.183012 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.188002 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.193375 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.194489 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.195675 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.208983 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.212274 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.250045 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.266011 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.284559 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8e616d20-36f4-4d59-9ce0-d2e18fd63902-hosts-file\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.284779 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, 
/tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.284973 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9kmbz\" (UniqueName: \"kubernetes.io/projected/8e616d20-36f4-4d59-9ce0-d2e18fd63902-kube-api-access-9kmbz\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.294855 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.305899 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.318912 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.330742 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.341096 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.350788 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.371810 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.382520 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.386940 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9kmbz\" (UniqueName: \"kubernetes.io/projected/8e616d20-36f4-4d59-9ce0-d2e18fd63902-kube-api-access-9kmbz\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.386978 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/8e616d20-36f4-4d59-9ce0-d2e18fd63902-hosts-file\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.387057 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: 
\"kubernetes.io/host-path/8e616d20-36f4-4d59-9ce0-d2e18fd63902-hosts-file\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.406699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9kmbz\" (UniqueName: \"kubernetes.io/projected/8e616d20-36f4-4d59-9ce0-d2e18fd63902-kube-api-access-9kmbz\") pod \"node-resolver-r6hvc\" (UID: \"8e616d20-36f4-4d59-9ce0-d2e18fd63902\") " pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.506671 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-r6hvc" Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.517296 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e616d20_36f4_4d59_9ce0_d2e18fd63902.slice/crio-43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0 WatchSource:0}: Error finding container 43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0: Status 404 returned error can't find the container with id 43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0 Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.588594 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.588673 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.588732 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.588759 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.588731195 +0000 UTC m=+21.383611179 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.588794 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.588785087 +0000 UTC m=+21.383665071 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.690128 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.690180 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.690204 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690318 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690339 4804 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690351 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690361 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690412 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.690393874 +0000 UTC m=+21.485273848 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690375 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690542 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690563 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690461 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.690441686 +0000 UTC m=+21.485321670 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.690640 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:25.690616281 +0000 UTC m=+21.485496265 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.737843 4804 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738208 4804 reflector.go:484] object-"openshift-network-operator"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738245 4804 reflector.go:484] object-"openshift-network-node-identity"/"ovnkube-identity-cm": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"ovnkube-identity-cm": Unexpected watch close - watch lasted less 
than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738293 4804 reflector.go:484] object-"openshift-network-node-identity"/"env-overrides": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"env-overrides": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738314 4804 reflector.go:484] object-"openshift-network-operator"/"iptables-alerter-script": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"iptables-alerter-script": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: E0128 11:22:24.738336 4804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.27:49756->38.102.83.27:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188ee135f383f2b5 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:22:05.438210741 +0000 UTC m=+1.233090725,LastTimestamp:2026-01-28 11:22:05.438210741 +0000 UTC m=+1.233090725,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738312 4804 
reflector.go:484] object-"openshift-network-operator"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-operator"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738322 4804 reflector.go:484] object-"openshift-dns"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738329 4804 reflector.go:484] object-"openshift-network-operator"/"metrics-tls": watch of *v1.Secret ended with: very short watch: object-"openshift-network-operator"/"metrics-tls": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738342 4804 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738343 4804 reflector.go:484] object-"openshift-dns"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-dns"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738361 4804 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738360 4804 reflector.go:484] object-"openshift-network-node-identity"/"openshift-service-ca.crt": watch of *v1.ConfigMap ended with: 
very short watch: object-"openshift-network-node-identity"/"openshift-service-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738381 4804 reflector.go:484] object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": watch of *v1.Secret ended with: very short watch: object-"openshift-dns"/"node-resolver-dockercfg-kz9s7": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738399 4804 reflector.go:484] object-"openshift-network-node-identity"/"kube-root-ca.crt": watch of *v1.ConfigMap ended with: very short watch: object-"openshift-network-node-identity"/"kube-root-ca.crt": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738415 4804 reflector.go:484] object-"openshift-network-node-identity"/"network-node-identity-cert": watch of *v1.Secret ended with: very short watch: object-"openshift-network-node-identity"/"network-node-identity-cert": Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: W0128 11:22:24.738435 4804 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.877274 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 18:14:09.77436866 +0000 UTC Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.919083 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 28 
11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.919610 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.920832 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.921514 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.922088 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.922617 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.923216 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.923750 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.924425 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 28 
11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.925014 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.925569 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.926244 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.926700 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.927237 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.927764 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.931280 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.931956 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 28 
11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.932772 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.933387 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.934146 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.935330 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.937155 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.938250 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.938689 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.939675 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.940131 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.940706 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.942024 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" 
path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.942488 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.945181 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.945677 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.946947 4804 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.947112 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.949292 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.950318 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.950735 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.951999 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.952824 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.953503 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.954363 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.955714 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" 
path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.956803 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.958996 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.960301 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.961303 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.962507 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.963037 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.966531 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.968343 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.968915 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.970317 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.970805 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.971674 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.972430 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.972982 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.974021 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.974493 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.984248 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.991791 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-28 11:17:23 +0000 UTC, rotation deadline is 2026-11-17 08:42:04.113839654 +0000 UTC Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.991857 4804 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7029h19m39.121985535s for next certificate rotation Jan 28 11:22:24 crc kubenswrapper[4804]: I0128 11:22:24.993060 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.005239 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.021935 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1a4037378ad3270a2bd06d3a9b2181ac846d876077a93ef221ea8ab024636f39"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.023131 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.023474 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-r6hvc" event={"ID":"8e616d20-36f4-4d59-9ce0-d2e18fd63902","Type":"ContainerStarted","Data":"e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.023531 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-r6hvc" event={"ID":"8e616d20-36f4-4d59-9ce0-d2e18fd63902","Type":"ContainerStarted","Data":"43bd772366ba3f66702e1f1064a93e87bd8e3a43e18cd87ee67a244dfd5e34c0"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.024948 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.024982 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.024999 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"d1a209bdf59e2de8ff9d3cc4ffea06668f6befa1dbc4ff3d94c857fc02a9676b"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.026329 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.026377 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"c8343671f367a0ebb3e7e9c7ae9174859228b2c5f42b039e39249b2919ced418"} Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.036652 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.051441 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.064102 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.086181 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.100570 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.120307 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.156576 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.186618 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.218945 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.248813 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.286992 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.301796 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.324967 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.596698 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.596814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.596976 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.597048 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.597027091 +0000 UTC m=+23.391907115 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.597170 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.597157925 +0000 UTC m=+23.392037909 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.629582 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-slkk8"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.630251 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rm9ff"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.630406 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.630935 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-lqqmt"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.631081 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.631245 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.631414 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.632345 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.633962 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634142 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634349 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634481 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.634642 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635022 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.635343 4804 reflector.go:561] object-"openshift-multus"/"default-dockercfg-2q5b6": failed to list *v1.Secret: secrets "default-dockercfg-2q5b6" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group 
"" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.635454 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"default-dockercfg-2q5b6\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"default-dockercfg-2q5b6\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635591 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635757 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.635998 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636140 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl": failed to list *v1.Secret: secrets "ovn-kubernetes-node-dockercfg-pwtwl" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636164 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-pwtwl\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-kubernetes-node-dockercfg-pwtwl\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between 
node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636213 4804 reflector.go:561] object-"openshift-multus"/"multus-daemon-config": failed to list *v1.ConfigMap: configmaps "multus-daemon-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-multus": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636229 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-multus\"/\"multus-daemon-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"multus-daemon-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-multus\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636406 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636499 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636551 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-config": failed to list *v1.ConfigMap: configmaps "ovnkube-config" is forbidden: User 
"system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636704 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.636602 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.636862 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.637060 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.637295 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"env-overrides": failed to list *v1.ConfigMap: configmaps "env-overrides" is forbidden: User "system:node:crc" cannot list resource 
"configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.637392 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"env-overrides\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"env-overrides\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.637418 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovnkube-script-lib": failed to list *v1.ConfigMap: configmaps "ovnkube-script-lib" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.637567 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"ovnkube-script-lib\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.637846 4804 reflector.go:561] object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert": failed to list *v1.Secret: secrets "ovn-node-metrics-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-ovn-kubernetes": no relationship found between node 'crc' and this object Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.637985 4804 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ovn-node-metrics-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-ovn-kubernetes\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.650098 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursi
veReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.667778 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.681378 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.692970 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697320 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-netns\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697372 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697397 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d901be89-84b0-4249-9548-2e626a112a4c-rootfs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697426 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-os-release\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " 
pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697450 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697478 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697503 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697525 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697684 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697709 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697723 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697745 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697747 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697758 4804 
projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697803 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-multus\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697830 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-multus-certs\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.697847 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.697829963 +0000 UTC m=+23.492710007 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697864 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697911 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-bin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697935 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") 
" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.697994 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-system-cni-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698020 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698042 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.698050 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698075 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 
11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.698086 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.698077311 +0000 UTC m=+23.492957365 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698109 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698134 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-cnibin\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d901be89-84b0-4249-9548-2e626a112a4c-mcd-auth-proxy-config\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698210 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-k8s-cni-cncf-io\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-conf-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/d901be89-84b0-4249-9548-2e626a112a4c-kube-api-access-np5cs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698320 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cnibin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698377 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-socket-dir-parent\") pod \"multus-lqqmt\" (UID: 
\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698433 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698482 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698505 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-system-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698532 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod 
\"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698553 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698572 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-kubelet\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698613 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-etc-kubernetes\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698655 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vncf\" (UniqueName: \"kubernetes.io/projected/735b7edc-6f8b-4f5f-a9ca-11964dd78266-kube-api-access-2vncf\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698684 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d901be89-84b0-4249-9548-2e626a112a4c-proxy-tls\") pod \"machine-config-daemon-slkk8\" 
(UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698704 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-binary-copy\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698726 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cni-binary-copy\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698795 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-os-release\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698853 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bdbf\" (UniqueName: 
\"kubernetes.io/projected/12825f11-ad6e-4db0-87b3-a619c0521c56-kube-api-access-7bdbf\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698894 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-hostroot\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698919 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698946 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.698966 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699013 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699041 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699058 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.699019 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.699120 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.699099172 +0000 UTC m=+23.493979186 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.699156 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.722239 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-
dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\
":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"cont
ainerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.739703 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.740449 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.753518 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.764459 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.776691 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.787956 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.794483 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799837 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/d901be89-84b0-4249-9548-2e626a112a4c-kube-api-access-np5cs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799876 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799918 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cnibin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799943 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-socket-dir-parent\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 
11:22:25.799968 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.799988 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800012 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800033 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-system-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800053 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800073 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d901be89-84b0-4249-9548-2e626a112a4c-proxy-tls\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800093 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-binary-copy\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800120 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-kubelet\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800140 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-etc-kubernetes\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800160 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2vncf\" (UniqueName: \"kubernetes.io/projected/735b7edc-6f8b-4f5f-a9ca-11964dd78266-kube-api-access-2vncf\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800126 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" 
(UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-socket-dir-parent\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800215 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cnibin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-system-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800211 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cni-binary-copy\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800261 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-kubelet\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800300 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-etc-kubernetes\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" 
Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800295 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800338 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7bdbf\" (UniqueName: \"kubernetes.io/projected/12825f11-ad6e-4db0-87b3-a619c0521c56-kube-api-access-7bdbf\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800385 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-hostroot\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800479 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-hostroot\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800556 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800803 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-os-release\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800818 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-binary-copy\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800877 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800900 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" 
(UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800938 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800966 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.800963 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/12825f11-ad6e-4db0-87b3-a619c0521c56-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801006 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801004 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod 
\"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801031 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801052 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-netns\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801036 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-netns\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801080 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801097 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-os-release\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 
crc kubenswrapper[4804]: I0128 11:22:25.801098 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d901be89-84b0-4249-9548-2e626a112a4c-rootfs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801116 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/d901be89-84b0-4249-9548-2e626a112a4c-rootfs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801171 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-os-release\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801192 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801206 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-cni-binary-copy\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801292 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801352 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-os-release\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801375 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801408 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801486 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801529 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: 
\"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-multus\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801603 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-multus-certs\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801655 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801682 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-multus-certs\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-multus\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801685 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-cni-dir\") pod \"multus-lqqmt\" (UID: 
\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801761 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-bin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801794 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-cni-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801818 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801821 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-var-lib-cni-bin\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: 
I0128 11:22:25.801910 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.801784 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802216 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-system-cni-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802262 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802290 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802312 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802334 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-cnibin\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d901be89-84b0-4249-9548-2e626a112a4c-mcd-auth-proxy-config\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802373 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-k8s-cni-cncf-io\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802390 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-conf-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802438 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-conf-dir\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802464 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-system-cni-dir\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802531 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802558 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802742 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/12825f11-ad6e-4db0-87b3-a619c0521c56-cnibin\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.802972 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/735b7edc-6f8b-4f5f-a9ca-11964dd78266-host-run-k8s-cni-cncf-io\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.803293 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d901be89-84b0-4249-9548-2e626a112a4c-mcd-auth-proxy-config\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.804140 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.816163 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d901be89-84b0-4249-9548-2e626a112a4c-proxy-tls\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.820492 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-np5cs\" (UniqueName: \"kubernetes.io/projected/d901be89-84b0-4249-9548-2e626a112a4c-kube-api-access-np5cs\") pod \"machine-config-daemon-slkk8\" (UID: \"d901be89-84b0-4249-9548-2e626a112a4c\") " pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.820853 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7bdbf\" (UniqueName: \"kubernetes.io/projected/12825f11-ad6e-4db0-87b3-a619c0521c56-kube-api-access-7bdbf\") pod \"multus-additional-cni-plugins-rm9ff\" (UID: \"12825f11-ad6e-4db0-87b3-a619c0521c56\") " pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.822906 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2vncf\" (UniqueName: \"kubernetes.io/projected/735b7edc-6f8b-4f5f-a9ca-11964dd78266-kube-api-access-2vncf\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.835407 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.847213 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.858715 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.870648 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.877070 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.878097 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 12:27:47.85024247 +0000 UTC Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.885817 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.904604 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.914930 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.915093 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.915153 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.915204 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.915252 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:25 crc kubenswrapper[4804]: E0128 11:22:25.915314 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.918119 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets
/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.944658 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa37232690
19bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}
,{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o:/
/58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.947876 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.958136 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.960061 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.966929 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd901be89_84b0_4249_9548_2e626a112a4c.slice/crio-d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f WatchSource:0}: Error finding container d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f: Status 404 returned error can't find the container with id d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f Jan 28 11:22:25 crc kubenswrapper[4804]: W0128 11:22:25.968800 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12825f11_ad6e_4db0_87b3_a619c0521c56.slice/crio-54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a WatchSource:0}: Error finding container 54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a: Status 404 returned error can't find the container with id 54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.973178 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:25 crc kubenswrapper[4804]: I0128 11:22:25.985807 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.012745 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.030320 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerStarted","Data":"54eea5cea37a399b4d8f8d6a1e22e1b5ac7e7e77f62c1d564bf040144a483a5a"} Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.032076 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"d17df1a1df878c4fde3785819a02e0d1f47e99e5e3882225419dae33c059262f"} Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.043048 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.079092 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.104919 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.124076 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.158855 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.161185 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.182328 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.222844 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.253438 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:26Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.303175 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.311305 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.447136 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.565946 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.677135 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.800931 4804 configmap.go:193] Couldn't get configMap openshift-multus/multus-daemon-config: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801054 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config podName:735b7edc-6f8b-4f5f-a9ca-11964dd78266 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.301030214 +0000 UTC m=+23.095910198 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "multus-daemon-config" (UniqueName: "kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config") pod "multus-lqqmt" (UID: "735b7edc-6f8b-4f5f-a9ca-11964dd78266") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801316 4804 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-script-lib: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801365 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.301354695 +0000 UTC m=+23.096234679 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovnkube-script-lib" (UniqueName: "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801397 4804 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/env-overrides: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801427 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. 
No retries permitted until 2026-01-28 11:22:27.301419317 +0000 UTC m=+23.096299301 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "env-overrides" (UniqueName: "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801428 4804 secret.go:188] Couldn't get secret openshift-ovn-kubernetes/ovn-node-metrics-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801541 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.30151976 +0000 UTC m=+23.096399734 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "ovn-node-metrics-cert" (UniqueName: "kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801605 4804 configmap.go:193] Couldn't get configMap openshift-ovn-kubernetes/ovnkube-config: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.801642 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.301634524 +0000 UTC m=+23.096514628 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "ovnkube-config" (UniqueName: "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.817540 4804 projected.go:288] Couldn't get configMap openshift-ovn-kubernetes/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.817599 4804 projected.go:194] Error preparing data for projected volume kube-api-access-55hnp for pod openshift-ovn-kubernetes/ovnkube-node-24gvs: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: E0128 11:22:26.817667 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp podName:686039c6-ae16-45ac-bb9f-4c39d57d6c80 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:27.317648957 +0000 UTC m=+23.112528941 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-55hnp" (UniqueName: "kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp") pod "ovnkube-node-24gvs" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.878469 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 00:26:59.409588934 +0000 UTC Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.914690 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.965814 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 28 11:22:26 crc kubenswrapper[4804]: I0128 11:22:26.966082 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.036020 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba"} Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.037760 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb" exitCode=0 Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.037822 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb"} Jan 28 
11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.040527 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c"} Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.040666 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5"} Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.052274 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.069824 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.077136 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.082876 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.097814 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.113596 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.136895 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.145499 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.161707 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\
\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"co
ntainerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.173390 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.185793 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.198710 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.199630 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.208832 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.221844 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.236613 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.250070 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.263007 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.277400 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.291853 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.304300 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318049 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318101 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318141 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318163 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " 
pod="openshift-multus/multus-lqqmt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318178 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318195 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.318793 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/735b7edc-6f8b-4f5f-a9ca-11964dd78266-multus-daemon-config\") pod \"multus-lqqmt\" (UID: \"735b7edc-6f8b-4f5f-a9ca-11964dd78266\") " pod="openshift-multus/multus-lqqmt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.319412 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.319452 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 
11:22:27.319475 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.323809 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.323910 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"ovnkube-node-24gvs\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.324973 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.336980 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.348513 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.360198 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.392016 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.429650 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2b
b72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.467116 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-lqqmt" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.472464 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mc
d-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.473628 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:27 crc kubenswrapper[4804]: W0128 11:22:27.483156 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod735b7edc_6f8b_4f5f_a9ca_11964dd78266.slice/crio-2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e WatchSource:0}: Error finding container 2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e: Status 404 returned error can't find the container with id 2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e Jan 28 11:22:27 crc kubenswrapper[4804]: W0128 11:22:27.487867 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod686039c6_ae16_45ac_bb9f_4c39d57d6c80.slice/crio-008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df WatchSource:0}: Error finding container 008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df: Status 404 returned error can't find the container with id 008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.516599 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete 
status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/en
trypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.554829 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.589256 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:27Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.621712 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.621844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.621919 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.621893023 +0000 UTC m=+27.416773007 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.621999 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.622084 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.622067117 +0000 UTC m=+27.416947101 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.722584 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.722634 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.722669 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722812 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722807 4804 projected.go:288] Couldn't get configMap 
openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722850 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722951 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.722933851 +0000 UTC m=+27.517813835 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722859 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722993 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.723040 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.723027684 +0000 UTC m=+27.517907668 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.722828 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.723062 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.723080 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:31.723074555 +0000 UTC m=+27.517954539 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.878986 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 04:47:19.96014038 +0000 UTC Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.914977 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.915002 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:27 crc kubenswrapper[4804]: I0128 11:22:27.914994 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.915104 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.915255 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:27 crc kubenswrapper[4804]: E0128 11:22:27.915355 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.045454 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" exitCode=0 Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.045531 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.045847 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df"} Jan 28 11:22:28 crc 
kubenswrapper[4804]: I0128 11:22:28.047664 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerStarted","Data":"a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.049196 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.049243 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"2f82f936abbd7c4f58d8989baed93b4c736e6378021771d6049fd15b0ed1b07e"} Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.063846 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.076196 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.088079 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.098494 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.111480 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.133624 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.178196 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731
ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.201983 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.214562 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.226993 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.237746 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.250063 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.260795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.273476 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.290126 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.303218 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.315389 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.334620 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.348578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.392319 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.430375 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.470372 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.513566 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.549261 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.589499 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.629492 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.671745 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath
\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOn
ly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.715600 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:28Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:28 crc kubenswrapper[4804]: I0128 11:22:28.879803 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 16:28:12.740816074 +0000 UTC Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.054582 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489" exitCode=0 Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.054647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059387 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059442 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059468 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.059499 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.071074 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.080366 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.102371 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.114790 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.126106 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.139786 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.152547 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.164309 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.179998 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.200196 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.212502 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.223635 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.244522 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.271627 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:29Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.880398 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 06:09:42.356999931 +0000 UTC Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.913959 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.914048 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:29 crc kubenswrapper[4804]: E0128 11:22:29.914105 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:29 crc kubenswrapper[4804]: E0128 11:22:29.914171 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:29 crc kubenswrapper[4804]: I0128 11:22:29.914058 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:29 crc kubenswrapper[4804]: E0128 11:22:29.914381 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.066369 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326" exitCode=0 Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.066421 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.088791 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.113140 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.133988 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.148849 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.164651 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.177616 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.190089 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.204013 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.222096 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 
11:22:30.236719 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d
1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.248377 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.271152 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.285829 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.298460 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.365727 4804 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.367672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.367818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.367941 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.368147 4804 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.374100 4804 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.374340 4804 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375327 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375615 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.375694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.388688 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392269 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392296 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392304 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392317 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.392326 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.407567 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411408 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.411479 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.421791 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424691 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.424747 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.436279 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439565 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439596 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.439634 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.449676 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:30Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:30 crc kubenswrapper[4804]: E0128 11:22:30.449793 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451010 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451037 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451058 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.451069 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553169 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553389 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553499 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553596 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.553688 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656164 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656197 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656221 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.656237 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759412 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759421 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759441 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.759455 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.862216 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.880832 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:46:11.17471972 +0000 UTC Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964712 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:30 crc kubenswrapper[4804]: I0128 11:22:30.964787 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:30Z","lastTransitionTime":"2026-01-28T11:22:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067373 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067417 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.067460 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.070898 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0" exitCode=0 Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.070979 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.074437 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.090237 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.103766 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.117853 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.132472 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.147388 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.164426 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169592 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169654 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.169665 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.178016 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5
b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.199576 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.211759 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.223436 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.234180 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.245779 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.263812 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.272080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 
11:22:31.272097 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.277355 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374639 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374681 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.374704 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477561 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477574 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477592 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.477603 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580026 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580081 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580097 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.580109 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.581672 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-v88kz"] Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.582031 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.585046 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.585316 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.585363 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.587124 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.601919 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.617899 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.649784 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664693 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664788 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: 
\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664813 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hv4wb\" (UniqueName: \"kubernetes.io/projected/28d27942-1d0e-4433-a349-e1a404557705-kube-api-access-hv4wb\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664851 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28d27942-1d0e-4433-a349-e1a404557705-serviceca\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.664916 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28d27942-1d0e-4433-a349-e1a404557705-host\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.665008 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.664995426 +0000 UTC m=+35.459875410 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.665053 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.665083 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.665077789 +0000 UTC m=+35.459957773 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.668162 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682899 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682932 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.682961 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.685347 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.702128 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.717752 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.734739 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.747597 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.764648 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28d27942-1d0e-4433-a349-e1a404557705-serviceca\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765420 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765446 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28d27942-1d0e-4433-a349-e1a404557705-host\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765476 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765511 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hv4wb\" (UniqueName: \"kubernetes.io/projected/28d27942-1d0e-4433-a349-e1a404557705-kube-api-access-hv4wb\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.765549 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/28d27942-1d0e-4433-a349-e1a404557705-host\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765599 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765630 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:31 crc 
kubenswrapper[4804]: E0128 11:22:31.765643 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765666 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765708 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.765675144 +0000 UTC m=+35.560555128 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765730 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.765722976 +0000 UTC m=+35.560602950 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765667 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765748 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765757 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.765790 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.765784428 +0000 UTC m=+35.560664412 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.766538 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/28d27942-1d0e-4433-a349-e1a404557705-serviceca\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.778146 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready 
status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785321 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785752 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 
11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.785787 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.789957 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hv4wb\" (UniqueName: \"kubernetes.io/projected/28d27942-1d0e-4433-a349-e1a404557705-kube-api-access-hv4wb\") pod \"node-ca-v88kz\" (UID: \"28d27942-1d0e-4433-a349-e1a404557705\") " pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.797054 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.810126 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.821861 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.838266 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:31Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.881109 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 01:29:01.117930055 +0000 UTC Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889137 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889170 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889192 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.889200 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.897319 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-v88kz" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.914295 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.914332 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.914370 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.914426 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.914518 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:31 crc kubenswrapper[4804]: E0128 11:22:31.914592 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:31 crc kubenswrapper[4804]: W0128 11:22:31.991321 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod28d27942_1d0e_4433_a349_e1a404557705.slice/crio-37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740 WatchSource:0}: Error finding container 37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740: Status 404 returned error can't find the container with id 37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740 Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991639 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:31 crc kubenswrapper[4804]: I0128 11:22:31.991672 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:31Z","lastTransitionTime":"2026-01-28T11:22:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.080726 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8" exitCode=0 Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.080791 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.081824 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v88kz" event={"ID":"28d27942-1d0e-4433-a349-e1a404557705","Type":"ContainerStarted","Data":"37cd13ba58c9144929bfce380e4b0493b4ca4690173dfe4ce57fb7d72f496740"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094221 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094250 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094260 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.094285 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.101002 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountP
ath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f
42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90
092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finis
hedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.112464 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.124360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.141213 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.152489 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.166330 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.178682 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.190200 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196282 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.196365 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.202247 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.214356 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.225320 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.236871 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.252038 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.272682 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.282360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:32Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 
11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.299800 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.335688 4804 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.402075 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504687 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504703 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.504713 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607145 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.607189 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.661573 4804 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710063 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710083 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.710096 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813597 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.813607 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.881668 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 04:35:28.47782565 +0000 UTC Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.916607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:32 crc kubenswrapper[4804]: I0128 11:22:32.917089 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:32Z","lastTransitionTime":"2026-01-28T11:22:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020224 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.020331 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.088494 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-v88kz" event={"ID":"28d27942-1d0e-4433-a349-e1a404557705","Type":"ContainerStarted","Data":"fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.093266 4804 generic.go:334] "Generic (PLEG): container finished" podID="12825f11-ad6e-4db0-87b3-a619c0521c56" containerID="f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108" exitCode=0 Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.093326 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerDied","Data":"f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.104072 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.118396 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122149 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122191 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122201 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.122235 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.128013 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.151389 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.165433 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.178320 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.197205 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.209862 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.223192 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.224952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc 
kubenswrapper[4804]: I0128 11:22:33.224987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.225002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.225052 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.225066 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.237536 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with incomplete status: 
[whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containe
rID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-
allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\
\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.257760 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.269140 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.282023 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.295452 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.306932 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.322519 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc 
kubenswrapper[4804]: I0128 11:22:33.327517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327525 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.327552 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.335458 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.345588 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.355611 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.369635 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.385917 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.397773 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-rele
ase-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.415742 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429174 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429927 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429937 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.429968 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.441574 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.456209 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.467497 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.479359 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.491318 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.504977 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:33Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532534 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532569 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532580 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.532611 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635160 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.635187 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737480 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737493 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.737503 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840249 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840274 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.840284 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.882124 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 00:28:33.225221009 +0000 UTC Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.914560 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.914597 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:33 crc kubenswrapper[4804]: E0128 11:22:33.914672 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.914745 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:33 crc kubenswrapper[4804]: E0128 11:22:33.914855 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:33 crc kubenswrapper[4804]: E0128 11:22:33.914989 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:33 crc kubenswrapper[4804]: I0128 11:22:33.942737 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:33Z","lastTransitionTime":"2026-01-28T11:22:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045913 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045974 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.045984 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.101812 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.102230 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.108291 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" event={"ID":"12825f11-ad6e-4db0-87b3-a619c0521c56","Type":"ContainerStarted","Data":"6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.123272 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b
96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-0
1-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha25
6:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.132278 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.144984 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Ru
nning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149805 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149833 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.149846 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.157222 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.168401 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.178010 4804 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.191800 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.203461 4804 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.214066 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.223813 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.240836 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252177 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.252194 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 
11:22:34.252205 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.254135 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.266525 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.277844 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.289329 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.301844 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.314135 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.315713 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 
11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.325497 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424
bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287f
aaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.335064 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.344861 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 
11:22:34.354801 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354813 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354828 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.354840 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.358033 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.376068 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.386306 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.403988 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.417001 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.432363 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.444275 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.453621 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.456904 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.456947 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.456984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.457002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.457015 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.465079 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11
\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.478536 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.491606 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.503628 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.513873 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5
b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.526197 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.551418 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559894 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559907 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.559933 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.561581 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.572095 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.583354 4804 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.598035 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kub
ernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.610215 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.624223 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.658661 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662709 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662771 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.662780 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.687685 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.706396 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.732191 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.765341 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.771324 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.867945 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.867997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.868005 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.868019 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.868028 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.883126 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 19:18:54.945636915 +0000 UTC Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.931338 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd9
7f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\
"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.944754 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745
f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.964462 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"m
ountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a24
73a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/c
ni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970075 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970139 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.970149 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:34Z","lastTransitionTime":"2026-01-28T11:22:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:34 crc kubenswrapper[4804]: I0128 11:22:34.996483 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [nbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\
",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:34Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.009864 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.025617 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.050607 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072133 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072155 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.072167 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.090824 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.111201 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.111551 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.130215 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.169231 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.173853 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.174636 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 
11:22:35.174648 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.210398 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.263312 4804 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.263394 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278273 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.278433 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.314712 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.356985 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381357 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381371 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.381426 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.394711 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.441111 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.474868 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5
b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 
28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484122 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.484134 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.511810 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernete
s.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\
":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.555134 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586142 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.586241 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.589663 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.631220 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.670934 4804 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688320 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688354 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688375 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.688384 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.709340 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.751206 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791220 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791242 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791255 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.791413 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.828260 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.874858 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.883601 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 02:32:28.675666613 +0000 UTC Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895193 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895271 4804 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895306 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.895323 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.914742 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:35 crc kubenswrapper[4804]: E0128 11:22:35.914955 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.915200 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.915426 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.915446 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:35 crc kubenswrapper[4804]: E0128 11:22:35.915507 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:35 crc kubenswrapper[4804]: E0128 11:22:35.915634 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.952463 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06
bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600
b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.990977 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:35Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997722 4804 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997741 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:35 crc kubenswrapper[4804]: I0128 11:22:35.997752 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:35Z","lastTransitionTime":"2026-01-28T11:22:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.100984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101102 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.101119 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.116166 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/0.log" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.123642 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19" exitCode=1 Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.123690 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.124512 4804 scope.go:117] "RemoveContainer" containerID="fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.143556 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.168848 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6c
ea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.182097 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.202845 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203535 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203549 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203572 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.203586 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.217310 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.229784 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.277273 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306412 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306465 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306481 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.306516 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.311681 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.351830 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.401270 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.409354 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 
11:22:36.409368 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.440593 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.471739 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511445 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 
11:22:36.511512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.511522 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.513418 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-de
v/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: 
Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baee
c600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.550128 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.594931 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:36Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613680 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.613694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.716950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.716996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.717009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.717029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.717041 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819244 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819340 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.819362 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.884402 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 14:31:46.453579579 +0000 UTC Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926735 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926793 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.926938 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:36 crc kubenswrapper[4804]: I0128 11:22:36.927015 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:36Z","lastTransitionTime":"2026-01-28T11:22:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030301 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.030401 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.131060 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/0.log" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132765 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132806 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.132851 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.134434 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.134542 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.150847 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb
9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\
":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.169456 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b
819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c9
0a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956f
d7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6
e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.192133 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 
reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.206342 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.220770 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235300 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235349 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.235378 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.237229 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.253614 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.268746 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.280337 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.292461 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.313578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.328723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338264 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338313 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338325 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338346 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.338361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.341893 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.354012 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.366502 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.440985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441055 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.441068 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.480906 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj"] Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.481344 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.483640 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.484080 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.497433 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.516648 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.531610 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543403 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543413 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543432 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.543442 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.544621 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z 
is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.556553 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.570451 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy
\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.583092 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"termi
nated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"r
estartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.604790 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 
11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 
11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.620212 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.635015 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.648976 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.649740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.649870 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.650009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.650098 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653371 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57lrq\" (UniqueName: \"kubernetes.io/projected/34e3d03d-371f-46d2-946a-6156c9570604-kube-api-access-57lrq\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653493 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.653583 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34e3d03d-371f-46d2-946a-6156c9570604-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.673117 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab
8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.712941 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready 
status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.753653 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.753939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.753985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754046 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754228 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57lrq\" (UniqueName: \"kubernetes.io/projected/34e3d03d-371f-46d2-946a-6156c9570604-kube-api-access-57lrq\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754319 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754342 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.754391 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34e3d03d-371f-46d2-946a-6156c9570604-ovn-control-plane-metrics-cert\") pod 
\"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.755192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-env-overrides\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.755474 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/34e3d03d-371f-46d2-946a-6156c9570604-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.763250 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/34e3d03d-371f-46d2-946a-6156c9570604-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.803203 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57lrq\" (UniqueName: \"kubernetes.io/projected/34e3d03d-371f-46d2-946a-6156c9570604-kube-api-access-57lrq\") pod \"ovnkube-control-plane-749d76644c-5jdhj\" (UID: \"34e3d03d-371f-46d2-946a-6156c9570604\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.810863 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.852695 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857846 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.857867 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.884992 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 16:55:52.752987047 +0000 UTC Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.910951 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\
\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bf
ac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e
9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":
{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:37Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.915185 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.915286 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:37 crc kubenswrapper[4804]: E0128 11:22:37.915402 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:37 crc kubenswrapper[4804]: E0128 11:22:37.915600 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.915759 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:37 crc kubenswrapper[4804]: E0128 11:22:37.915931 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961626 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:37 crc kubenswrapper[4804]: I0128 11:22:37.961702 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:37Z","lastTransitionTime":"2026-01-28T11:22:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065186 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065239 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065284 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.065304 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.101339 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" Jan 28 11:22:38 crc kubenswrapper[4804]: W0128 11:22:38.119049 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34e3d03d_371f_46d2_946a_6156c9570604.slice/crio-7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21 WatchSource:0}: Error finding container 7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21: Status 404 returned error can't find the container with id 7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21 Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.141249 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.142031 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/0.log" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.145464 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" exitCode=1 Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.145597 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.145733 4804 scope.go:117] "RemoveContainer" containerID="fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.146808 4804 scope.go:117] "RemoveContainer" 
containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:38 crc kubenswrapper[4804]: E0128 11:22:38.147139 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.152396 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" event={"ID":"34e3d03d-371f-46d2-946a-6156c9570604","Type":"ContainerStarted","Data":"7762ec82e64b87ad22755bc0db597c67f7057f9ace1e2ece348a192f58ccbb21"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.167108 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.167782 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.167930 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.168065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.168135 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.168212 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.185514 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.212729 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db9
8cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.226948 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.242325 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.255651 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270685 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270555 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.270726 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.285705 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.301683 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.319839 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.331391 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374769 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.374817 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.376723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5
b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.411210 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.449897 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.477917 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 
11:22:38.477984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.477999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.478023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.478035 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.499697 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.536587 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 
11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 
6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55h
np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 
28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581081 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581094 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.581131 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683836 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683914 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683933 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.683945 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787868 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787958 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.787985 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.885578 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 14:51:30.788851231 +0000 UTC Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891268 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891278 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891298 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.891308 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.955764 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-bgqd8"] Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.956660 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:38 crc kubenswrapper[4804]: E0128 11:22:38.956785 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.978489 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fde
e88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resource
s\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994217 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994231 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994253 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:38 crc kubenswrapper[4804]: I0128 11:22:38.994268 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:38Z","lastTransitionTime":"2026-01-28T11:22:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.002566 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:38Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.022015 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.044196 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.068281 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd79940737647be08c06b53ff6df7737154a31bb7c2c9ffafa9f82765ca8be19\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:36Z\\\",\\\"message\\\":\\\"ressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:36.013676 6108 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.013735 6108 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 
11:22:36.013744 6108 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:36.013750 6108 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:36.013756 6108 handler.go:208] Removed *v1.Node event handler 2\\\\nI0128 11:22:36.013761 6108 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:36.013767 6108 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:36.014012 6108 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:36.014861 6108 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:36.014910 6108 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:36.014944 6108 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:36.014971 6108 factory.go:656] Stopping watch factory\\\\nI0128 11:22:36.014986 6108 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:3\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 
6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\"
:\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55h
np\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.071765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zxhvp\" (UniqueName: \"kubernetes.io/projected/03844e8b-8d66-4cd7-aa19-51caa1407918-kube-api-access-zxhvp\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.071842 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.081428 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097207 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097252 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097277 4804 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.097291 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.114391 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostI
P\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.136578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.158518 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.158914 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" event={"ID":"34e3d03d-371f-46d2-946a-6156c9570604","Type":"ContainerStarted","Data":"022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.159010 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" event={"ID":"34e3d03d-371f-46d2-946a-6156c9570604","Type":"ContainerStarted","Data":"0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.161454 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.166445 4804 scope.go:117] "RemoveContainer" 
containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.166738 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.173373 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zxhvp\" (UniqueName: \"kubernetes.io/projected/03844e8b-8d66-4cd7-aa19-51caa1407918-kube-api-access-zxhvp\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.173412 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.173533 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.173591 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:39.673572631 +0000 UTC m=+35.468452625 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.179582 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run
/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.195578 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200859 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.200976 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.202108 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zxhvp\" (UniqueName: \"kubernetes.io/projected/03844e8b-8d66-4cd7-aa19-51caa1407918-kube-api-access-zxhvp\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.213392 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.236290 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshif
t-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.259074 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.279369 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.297575 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc 
kubenswrapper[4804]: I0128 11:22:39.304294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304308 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304329 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.304343 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.314803 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.341549 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db9
8cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.358871 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.376136 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.397214 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407553 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407637 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.407675 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.432089 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.470675 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e539
5a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.510939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.510992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.511004 4804 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.511027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.511039 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.517180 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.553268 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.594694 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613639 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613708 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.613780 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.635345 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.677092 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc 
kubenswrapper[4804]: I0128 11:22:39.680775 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.680965 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681078 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681085 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.681036828 +0000 UTC m=+51.475916852 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681140 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:40.681121 +0000 UTC m=+36.476001004 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.681248 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681445 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.681603 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.681572875 +0000 UTC m=+51.476452869 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717401 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717448 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717459 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717473 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.717484 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.718039 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5
b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.753145 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.782258 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.782309 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.782350 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782452 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782502 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2026-01-28 11:22:55.78248983 +0000 UTC m=+51.577369814 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782582 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782615 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782635 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782692 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.782673796 +0000 UTC m=+51.577553810 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782761 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782841 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.782873 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.783061 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:55.783017697 +0000 UTC m=+51.577897801 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.793650 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\
"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not 
yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819744 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.819759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.835344 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.876869 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.886124 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 11:18:42.818861418 +0000 UTC Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.913990 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:39Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.914138 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.914167 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.914249 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.914289 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.914440 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:39 crc kubenswrapper[4804]: E0128 11:22:39.914553 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923404 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923476 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923525 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:39 crc kubenswrapper[4804]: I0128 11:22:39.923546 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:39Z","lastTransitionTime":"2026-01-28T11:22:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027122 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.027274 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.131993 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.132013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235376 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235669 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.235920 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339506 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.339570 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443696 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.443759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547112 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547237 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.547318 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606145 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.606201 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.625518 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630210 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.630228 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.648912 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653743 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653785 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.653832 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.668279 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672416 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.672491 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.690505 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.694018 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.694261 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.694378 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:42.694351591 +0000 UTC m=+38.489231665 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696281 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.696310 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.716808 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:40Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:40Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.716969 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719365 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719439 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719460 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719491 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.719510 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823404 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823416 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823437 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.823451 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.886560 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 06:05:22.704377053 +0000 UTC Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.914480 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:40 crc kubenswrapper[4804]: E0128 11:22:40.914646 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927872 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:40 crc kubenswrapper[4804]: I0128 11:22:40.927919 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:40Z","lastTransitionTime":"2026-01-28T11:22:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030347 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030401 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.030446 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133128 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133186 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133202 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.133214 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235715 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235725 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.235751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338733 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.338784 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442578 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442645 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.442656 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.545217 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648076 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.648103 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750353 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750399 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750408 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750423 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.750431 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854214 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854268 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854288 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854314 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.854333 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.886862 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 04:51:16.162559159 +0000 UTC Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.914716 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.914769 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:41 crc kubenswrapper[4804]: E0128 11:22:41.914998 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.915009 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:41 crc kubenswrapper[4804]: E0128 11:22:41.915285 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:41 crc kubenswrapper[4804]: E0128 11:22:41.915461 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956462 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956524 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956536 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956555 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:41 crc kubenswrapper[4804]: I0128 11:22:41.956569 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:41Z","lastTransitionTime":"2026-01-28T11:22:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.058938 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.059133 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161873 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161971 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.161995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.162012 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265784 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.265842 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368974 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.368994 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.369009 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.470973 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471019 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471032 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471049 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.471061 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573582 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573628 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.573679 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676348 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676481 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.676503 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.714558 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:42 crc kubenswrapper[4804]: E0128 11:22:42.714727 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:42 crc kubenswrapper[4804]: E0128 11:22:42.714816 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:46.714789553 +0000 UTC m=+42.509669577 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778603 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778619 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.778631 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881387 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881478 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881546 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.881568 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.887660 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 17:04:11.617040892 +0000 UTC Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.914120 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:42 crc kubenswrapper[4804]: E0128 11:22:42.914337 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984490 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984534 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984549 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:42 crc kubenswrapper[4804]: I0128 11:22:42.984567 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:42Z","lastTransitionTime":"2026-01-28T11:22:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087653 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087664 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.087694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190713 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190725 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190743 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.190755 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294545 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294625 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.294670 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397498 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397546 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397569 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397586 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.397596 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500904 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500964 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.500998 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603964 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603981 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.603993 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.706992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.707016 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809759 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809776 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.809788 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.888854 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 10:34:02.084862268 +0000 UTC Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912258 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912281 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.912291 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:43Z","lastTransitionTime":"2026-01-28T11:22:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.914458 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.914498 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:43 crc kubenswrapper[4804]: I0128 11:22:43.914495 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:43 crc kubenswrapper[4804]: E0128 11:22:43.914586 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:43 crc kubenswrapper[4804]: E0128 11:22:43.914694 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:43 crc kubenswrapper[4804]: E0128 11:22:43.914791 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015356 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015402 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015414 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015432 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.015444 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118475 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118532 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118557 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.118570 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.221361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323801 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323841 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.323857 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427332 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427351 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.427364 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529413 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529427 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.529436 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.631955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.631987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.631998 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.632015 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.632027 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734642 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734677 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734700 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.734711 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836337 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836395 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.836418 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.889090 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 09:19:59.712735107 +0000 UTC Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.914472 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:44 crc kubenswrapper[4804]: E0128 11:22:44.914592 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.928795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939285 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939310 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.939320 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:44Z","lastTransitionTime":"2026-01-28T11:22:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.941019 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.952929 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.963997 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc kubenswrapper[4804]: I0128 11:22:44.975182 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:44 crc 
kubenswrapper[4804]: I0128 11:22:44.988234 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-h
v4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:44Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.005738 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da
410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.018178 4804 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.033922 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042049 4804 kubelet_node_status.go:724] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042113 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.042124 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.051598 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.072845 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.084056 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.095785 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.112537 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d
093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.126987 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.141348 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.144985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145030 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.145053 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.155935 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:45Z is after 2025-08-24T17:21:41Z"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247863 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247892 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.247904 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350598 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350626 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.350637 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453018 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453101 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453114 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453133 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.453145 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555309 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555364 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555378 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.555390 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658606 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658677 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658719 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.658737 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.761940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762057 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.762067 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864675 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864716 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864744 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.864757 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.890237 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 14:51:22.77431804 +0000 UTC
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.914821 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.915061 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.914915 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:22:45 crc kubenswrapper[4804]: E0128 11:22:45.915197 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:22:45 crc kubenswrapper[4804]: E0128 11:22:45.915361 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:22:45 crc kubenswrapper[4804]: E0128 11:22:45.915529 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967809 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.967989 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:45 crc kubenswrapper[4804]: I0128 11:22:45.968013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:45Z","lastTransitionTime":"2026-01-28T11:22:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071351 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071822 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.071859 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175100 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.175130 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278501 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.278519 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380804 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.380813 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483602 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483649 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483658 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483678 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.483690 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586598 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.586701 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689538 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689637 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.689692 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.754419 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:22:46 crc kubenswrapper[4804]: E0128 11:22:46.754656 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 11:22:46 crc kubenswrapper[4804]: E0128 11:22:46.754760 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:22:54.754731568 +0000 UTC m=+50.549611592 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792858 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792930 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.792974 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.890404 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 02:50:10.717008412 +0000 UTC
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.895965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896020 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.896054 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.914427 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:22:46 crc kubenswrapper[4804]: E0128 11:22:46.914607 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998759 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:46 crc kubenswrapper[4804]: I0128 11:22:46.998801 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:46Z","lastTransitionTime":"2026-01-28T11:22:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101838 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101900 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.101920 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204548 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204591 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204616 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.204626 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307497 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307562 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307637 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.307663 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410185 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410200 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.410209 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512941 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512971 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.512982 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616309 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616377 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616389 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616404 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.616414 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719310 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719359 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719372 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719389 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.719403 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.822267 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.890656 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:09:14.89369712 +0000 UTC Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.914007 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.914054 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.914112 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:47 crc kubenswrapper[4804]: E0128 11:22:47.914162 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:47 crc kubenswrapper[4804]: E0128 11:22:47.914360 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:47 crc kubenswrapper[4804]: E0128 11:22:47.914454 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925013 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925071 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:47 crc kubenswrapper[4804]: I0128 11:22:47.925094 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:47Z","lastTransitionTime":"2026-01-28T11:22:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027936 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.027952 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131312 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.131348 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.239169 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.342751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343270 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343297 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343328 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.343347 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445594 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445640 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445649 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.445683 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548260 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548300 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.548353 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651111 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.651190 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753430 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753447 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.753461 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856912 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856977 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.856996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.857009 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.891652 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 04:33:06.378644846 +0000 UTC Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.914019 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:48 crc kubenswrapper[4804]: E0128 11:22:48.914164 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960552 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960563 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:48 crc kubenswrapper[4804]: I0128 11:22:48.960589 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:48Z","lastTransitionTime":"2026-01-28T11:22:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.063986 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.064071 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167000 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167097 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167124 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.167238 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.269947 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.269993 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.270004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.270022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.270036 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373515 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373577 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.373620 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476128 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476173 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476189 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476208 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.476220 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579427 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579447 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.579464 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682653 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.682694 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785355 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785444 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.785456 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888106 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888160 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888175 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.888211 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.892690 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 18:27:52.130100626 +0000 UTC Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.914308 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.914353 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.914308 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:49 crc kubenswrapper[4804]: E0128 11:22:49.914487 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:49 crc kubenswrapper[4804]: E0128 11:22:49.914601 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:49 crc kubenswrapper[4804]: E0128 11:22:49.915006 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.915357 4804 scope.go:117] "RemoveContainer" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990895 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990935 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990947 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:49 crc kubenswrapper[4804]: I0128 11:22:49.990982 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:49Z","lastTransitionTime":"2026-01-28T11:22:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094042 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094121 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.094133 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203480 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203493 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.203526 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.208215 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.212283 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.212502 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.231076 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.244663 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 
28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.259664 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b
424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed082
87faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.273441 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.287865 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306642 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 
11:22:50.306688 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306700 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.306732 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.308609 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.331739 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 
2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"
name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.345427 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.370436 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11
:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"19
2.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.388604 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.404138 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409148 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409159 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.409194 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.422248 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.438871 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.451273 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.463970 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cl
uster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.475546 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.492006 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:50Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512545 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512559 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.512591 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.614965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615047 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615066 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.615078 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717431 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717516 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.717528 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821471 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821555 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821584 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.821601 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.893059 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 02:13:29.916159951 +0000 UTC Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.914750 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:50 crc kubenswrapper[4804]: E0128 11:22:50.914980 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925762 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925782 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:50 crc kubenswrapper[4804]: I0128 11:22:50.925796 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:50Z","lastTransitionTime":"2026-01-28T11:22:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028784 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.028871 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099562 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.099594 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.118254 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122556 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122587 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122597 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122613 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.122631 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.136323 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142133 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142143 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142160 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.142175 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.159049 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163153 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163184 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163193 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163207 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.163218 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.182814 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188785 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188856 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188870 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188908 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.188921 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.200581 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:51Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.200714 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202308 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202353 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.202400 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.217754 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/2.log" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.218572 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/1.log" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.221915 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.221985 4804 scope.go:117] "RemoveContainer" containerID="d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.221907 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" exitCode=1 Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.222958 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.223159 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.241335 4804 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.256116 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.267235 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.279402 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.298684 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d
093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304391 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304425 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304484 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.304514 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.311583 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.327901 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error 
initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.340964 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.354646 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.368299 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.379344 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc 
kubenswrapper[4804]: I0128 11:22:51.392174 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045
517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407405 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407430 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.407439 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.409069 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d0a51e2aa2523cc7affc1cca338c81c8a0bcd0e78ad1244d9cdbab15bf2b2700\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"message\\\":\\\"oval\\\\nI0128 11:22:37.458146 6245 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0128 11:22:37.458152 6245 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0128 11:22:37.458185 6245 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:37.458201 6245 handler.go:208] Removed *v1.Namespace event 
handler 5\\\\nI0128 11:22:37.458209 6245 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0128 11:22:37.458208 6245 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0128 11:22:37.458213 6245 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0128 11:22:37.458250 6245 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0128 11:22:37.458286 6245 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:37.458305 6245 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0128 11:22:37.458310 6245 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:37.458318 6245 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:37.458338 6245 factory.go:656] Stopping watch factory\\\\nI0128 11:22:37.458357 6245 ovnkube.go:599] Stopped ovnkube\\\\nI0128 11:22:37.458369 6245 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:37.458387 6245 handler.go:208] Removed *v1.Node event handler 2\\\\nI01\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:36Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 
6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/li
b/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"r
eadOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.418553 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.431723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd
10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-res
ources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.443048 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.453621 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:51Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509565 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 
11:22:51.509631 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.509669 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612622 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.612940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.613068 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.716689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.716995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.717035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.717082 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.717094 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820712 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820721 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.820751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.893808 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 20:44:34.113268741 +0000 UTC Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.914604 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.917004 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.917349 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.917451 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.917635 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:51 crc kubenswrapper[4804]: E0128 11:22:51.917865 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.923271 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.923676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.925090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.925226 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:51 crc kubenswrapper[4804]: I0128 11:22:51.925340 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:51Z","lastTransitionTime":"2026-01-28T11:22:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028020 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028095 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028131 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.028168 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.130997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.131087 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.229181 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/2.log" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232935 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.232959 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336781 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336818 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336830 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336858 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.336867 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439397 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439407 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.439441 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.541875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.541962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.541982 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.542003 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.542015 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644733 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644745 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.644794 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747400 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747453 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.747519 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850223 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850252 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850260 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850276 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.850286 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.894075 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 04:08:57.046788882 +0000 UTC Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.915135 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:52 crc kubenswrapper[4804]: E0128 11:22:52.915265 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.924401 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.925246 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:22:52 crc kubenswrapper[4804]: E0128 11:22:52.925471 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.939235 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952810 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952840 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952864 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.952872 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:52Z","lastTransitionTime":"2026-01-28T11:22:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.953441 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.967383 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.979558 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:52 crc kubenswrapper[4804]: I0128 11:22:52.991967 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:52Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.021631 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d
093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.035019 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.047432 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.054965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055017 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.055066 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.061013 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.071804 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc 
kubenswrapper[4804]: I0128 11:22:53.086646 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\"
:\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc 
kubenswrapper[4804]: I0128 11:22:53.097012 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.109508 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.126810 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.142350 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.153544 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157355 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157382 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.157394 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.165292 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:53Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.259751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362982 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.362997 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.465995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.466014 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568486 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.568517 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670925 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.670940 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773822 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773866 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773892 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773909 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.773921 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876536 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876576 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.876608 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.894849 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 10:12:57.964194544 +0000 UTC Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.914464 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.914492 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.914516 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:53 crc kubenswrapper[4804]: E0128 11:22:53.914575 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:53 crc kubenswrapper[4804]: E0128 11:22:53.914722 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:53 crc kubenswrapper[4804]: E0128 11:22:53.914818 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.978996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979074 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979087 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:53 crc kubenswrapper[4804]: I0128 11:22:53.979098 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:53Z","lastTransitionTime":"2026-01-28T11:22:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081498 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081552 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081564 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081581 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.081592 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184102 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.184127 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.286990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287030 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.287066 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389854 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.389874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.390042 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.492654 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493017 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493121 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493220 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.493309 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596381 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596394 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596411 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.596424 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698851 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698898 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698908 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698920 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.698929 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801116 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801231 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.801251 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.840001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:54 crc kubenswrapper[4804]: E0128 11:22:54.840195 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:54 crc kubenswrapper[4804]: E0128 11:22:54.840277 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:10.840248435 +0000 UTC m=+66.635128419 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.895816 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 09:31:36.562016775 +0000 UTC Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903714 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.903737 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:54Z","lastTransitionTime":"2026-01-28T11:22:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.916731 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:54 crc kubenswrapper[4804]: E0128 11:22:54.916829 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.935355 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.954278 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:54 crc kubenswrapper[4804]: I0128 11:22:54.976797 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.001788 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:54Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 
11:22:55.006674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006688 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.006719 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.021757 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.050862 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.061574 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.075268 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.095688 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d
093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108127 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108142 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.108152 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.117141 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.131484 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.154210 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.171390 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.188816 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.203789 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213272 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213287 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.213298 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.220421 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/et
c/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.234228 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:22:55Z is after 2025-08-24T17:21:41Z" Jan 28 11:22:55 crc 
kubenswrapper[4804]: I0128 11:22:55.316255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316322 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.316333 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419092 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419134 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419145 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419163 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.419173 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522118 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522188 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522229 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.522242 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625037 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625088 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.625131 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.727977 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.750663 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.750777 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.750864 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.750941 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.750923089 +0000 UTC m=+83.545803083 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.751120 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.751112075 +0000 UTC m=+83.545992059 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.830652 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.830934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.831041 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.831115 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.831192 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.851421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.851724 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851737 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851851 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851869 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851946 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.851927487 +0000 UTC m=+83.646807531 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.851806 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.851820 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852047 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852071 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod 
openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852144 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.852125473 +0000 UTC m=+83.647005457 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852317 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.852439 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:27.852424443 +0000 UTC m=+83.647304417 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.896974 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 11:42:35.011588131 +0000 UTC
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.914500 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.914506 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.914636 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.914740 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.914829 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:22:55 crc kubenswrapper[4804]: E0128 11:22:55.915068 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934201 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934212 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934230 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:55 crc kubenswrapper[4804]: I0128 11:22:55.934242 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:55Z","lastTransitionTime":"2026-01-28T11:22:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.036953 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.036990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.037001 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.037054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.037071 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.139804 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140335 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.140446 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242796 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242846 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242905 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.242918 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345285 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345296 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345309 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.345318 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447234 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447494 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447565 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.447713 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549584 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549631 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549645 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.549661 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.550010 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651953 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651966 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651986 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.651998 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.754779 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755036 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755129 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.755265 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857441 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857470 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857492 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.857502 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.897871 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 07:34:59.990641386 +0000 UTC
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.914554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:22:56 crc kubenswrapper[4804]: E0128 11:22:56.914689 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960216 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960485 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960568 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:56 crc kubenswrapper[4804]: I0128 11:22:56.960783 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:56Z","lastTransitionTime":"2026-01-28T11:22:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.063972 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064036 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064054 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.064063 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167079 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.167121 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269637 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269663 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.269686 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373206 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.373287 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476630 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476642 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476666 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.476679 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579633 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579792 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.579816 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684453 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.684920 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788735 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.788744 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892531 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892719 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.892861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.893053 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.898566 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 15:27:25.514112577 +0000 UTC
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.914902 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:22:57 crc kubenswrapper[4804]: E0128 11:22:57.915175 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.915058 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:22:57 crc kubenswrapper[4804]: E0128 11:22:57.915368 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.915032 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:22:57 crc kubenswrapper[4804]: E0128 11:22:57.915520 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997646 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997657 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:57 crc kubenswrapper[4804]: I0128 11:22:57.997690 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:57Z","lastTransitionTime":"2026-01-28T11:22:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100433 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100489 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100501 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100518 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.100530 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.203942 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204011 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204033 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204062 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.204084 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307807 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.307953 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411838 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411860 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.411877 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515267 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515337 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515358 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515387 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.515408 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619070 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619170 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.619185 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722386 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722448 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722469 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.722485 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826175 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826248 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826277 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826310 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.826333 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.899005 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 02:16:09.917484049 +0000 UTC Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.914712 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:22:58 crc kubenswrapper[4804]: E0128 11:22:58.915145 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929765 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929811 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929821 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:58 crc kubenswrapper[4804]: I0128 11:22:58.929851 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:58Z","lastTransitionTime":"2026-01-28T11:22:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033094 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.033108 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.136950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.136997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.137029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.137049 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.137063 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239801 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.239917 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343298 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343373 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343388 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.343401 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447076 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447170 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447197 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447229 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.447250 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551128 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.551196 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653794 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.653908 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756076 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756120 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.756161 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.857958 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858012 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.858035 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.899726 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 12:06:16.371302831 +0000 UTC Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.914065 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.914145 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.914065 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:22:59 crc kubenswrapper[4804]: E0128 11:22:59.914279 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:22:59 crc kubenswrapper[4804]: E0128 11:22:59.914382 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:22:59 crc kubenswrapper[4804]: E0128 11:22:59.914482 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960890 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960948 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:22:59 crc kubenswrapper[4804]: I0128 11:22:59.960978 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:22:59Z","lastTransitionTime":"2026-01-28T11:22:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063513 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063567 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063586 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063606 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.063619 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166635 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166702 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166748 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.166765 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.229007 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.242788 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.245483 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.261860 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270668 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270691 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.270709 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.273109 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.284040 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e539
5a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.311361 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.326241 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"res
tartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.344606 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.360272 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374183 4804 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374223 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.374837 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.393843 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.406213 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc 
kubenswrapper[4804]: I0128 11:23:00.423516 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.12
6.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045
517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\
":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\
\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.453823 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.465601 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478348 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478386 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478417 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478439 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.478452 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.479180 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5
b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.492381 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.504855 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:00Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581339 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 
11:23:00.581409 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581424 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581445 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.581464 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684718 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684757 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.684777 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788161 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.788174 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.890726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.891332 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.899847 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 05:12:03.837046259 +0000 UTC Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.914421 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:00 crc kubenswrapper[4804]: E0128 11:23:00.914584 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.993936 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.993978 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.993988 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.994004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:00 crc kubenswrapper[4804]: I0128 11:23:00.994013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:00Z","lastTransitionTime":"2026-01-28T11:23:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096830 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096842 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096860 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.096904 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199291 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199335 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199361 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.199373 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302200 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302255 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302282 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.302292 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405209 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405277 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405290 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.405301 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406164 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406190 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406208 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.406216 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.421405 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.425911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.425992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.426011 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.426033 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.426049 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.438038 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441693 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441748 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441771 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.441781 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.453013 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456913 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456926 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.456937 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.471833 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.478849 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.478975 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.479056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.479144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.479581 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.493987 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:01Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:01Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.494117 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.507783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508454 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.508636 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610979 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.610994 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.611005 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713641 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713674 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713696 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.713706 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816893 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.816966 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.817013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.900770 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:35:47.237237274 +0000 UTC
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.914079 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.914131 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.914322 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.914452 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.914605 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:23:01 crc kubenswrapper[4804]: E0128 11:23:01.914653 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918730 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918781 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918805 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:01 crc kubenswrapper[4804]: I0128 11:23:01.918816 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:01Z","lastTransitionTime":"2026-01-28T11:23:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021111 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021182 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021212 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.021221 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.122959 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.122999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.123011 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.123028 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.123039 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.224963 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.224987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.224996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.225008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.225017 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327225 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327257 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327278 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.327288 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429652 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429681 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429701 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.429710 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532233 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.532278 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634622 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634632 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.634660 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737245 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737253 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737287 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.737301 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839403 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839449 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.839477 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.901050 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 21:25:39.941333278 +0000 UTC
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.914484 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:23:02 crc kubenswrapper[4804]: E0128 11:23:02.914794 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941727 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:02 crc kubenswrapper[4804]: I0128 11:23:02.941755 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:02Z","lastTransitionTime":"2026-01-28T11:23:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044055 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044105 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044129 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.044140 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149253 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149328 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149343 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.149378 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252267 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252333 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.252363 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354717 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.354741 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456839 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.456869 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559082 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.559122 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661753 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.661763 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764377 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.764428 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866666 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.866751 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.902032 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 07:56:45.897331875 +0000 UTC
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.914530 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.914563 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.914557 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:23:03 crc kubenswrapper[4804]: E0128 11:23:03.914850 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:23:03 crc kubenswrapper[4804]: E0128 11:23:03.914993 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:23:03 crc kubenswrapper[4804]: E0128 11:23:03.915080 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969779 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969792 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969815 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:03 crc kubenswrapper[4804]: I0128 11:23:03.969831 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:03Z","lastTransitionTime":"2026-01-28T11:23:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.072599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.072933 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.073008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.073081 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.073147 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175736 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175779 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175806 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.175820 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277569 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277616 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277631 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.277642 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379681 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379702 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.379711 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481979 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.481989 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584280 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584333 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.584361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686623 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686632 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.686663 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788638 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788680 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788693 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.788718 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890917 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890927 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.890954 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.902175 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 19:53:28.646373014 +0000 UTC Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.914082 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:04 crc kubenswrapper[4804]: E0128 11:23:04.914248 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.929564 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.945264 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.961794 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.975545 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.987893 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:04Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:04 crc 
kubenswrapper[4804]: I0128 11:23:04.992560 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992667 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:04 crc kubenswrapper[4804]: I0128 11:23:04.992678 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:04Z","lastTransitionTime":"2026-01-28T11:23:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.011023 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.021555 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.033144 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.044550 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.055794 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.070382 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.083555 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.092863 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095100 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095139 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.095180 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.104334 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.126833 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.138504 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.153249 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.164572 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:05Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197627 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197683 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197696 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197716 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.197728 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300125 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300193 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300242 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.300255 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403347 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403358 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403375 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.403386 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505804 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505817 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505833 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.505843 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608797 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608877 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608922 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.608974 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711715 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.711759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.814992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.815113 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.902655 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 02:42:58.281823731 +0000 UTC Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.914083 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.914124 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.914207 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.914279 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.914432 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.914564 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.916277 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:23:05 crc kubenswrapper[4804]: E0128 11:23:05.916711 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918458 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918519 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:05 crc kubenswrapper[4804]: I0128 11:23:05.918529 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:05Z","lastTransitionTime":"2026-01-28T11:23:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020514 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020561 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020593 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.020608 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123480 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123554 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123576 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.123595 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227213 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227232 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.227338 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330431 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330493 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330511 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.330523 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434102 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.434138 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537687 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537796 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.537831 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.640938 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.640997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.641012 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.641029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.641042 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744388 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744410 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744468 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.744494 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.847987 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848040 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848083 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.848100 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.902847 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 21:32:43.812493474 +0000 UTC Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.914306 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:06 crc kubenswrapper[4804]: E0128 11:23:06.914474 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951323 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951363 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951391 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:06 crc kubenswrapper[4804]: I0128 11:23:06.951403 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:06Z","lastTransitionTime":"2026-01-28T11:23:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054939 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054976 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.054993 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.158402 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159059 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159069 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159085 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.159096 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267122 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267137 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267163 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.267185 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371754 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371763 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371777 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.371787 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475603 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.475632 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578063 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578132 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.578143 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680482 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680521 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680545 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.680556 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783567 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.783635 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886871 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886936 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886955 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.886967 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.903420 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 22:42:58.436851821 +0000 UTC Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.918527 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.918558 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.918581 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:07 crc kubenswrapper[4804]: E0128 11:23:07.918704 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:07 crc kubenswrapper[4804]: E0128 11:23:07.918820 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:07 crc kubenswrapper[4804]: E0128 11:23:07.918929 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990140 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990210 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:07 crc kubenswrapper[4804]: I0128 11:23:07.990240 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:07Z","lastTransitionTime":"2026-01-28T11:23:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093351 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093416 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.093441 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196064 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196155 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.196215 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298208 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298272 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.298283 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400671 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400717 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.400725 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503378 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503451 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.503477 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606191 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606286 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606306 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606329 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.606346 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708814 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708830 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.708843 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811703 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811769 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.811803 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.903920 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 12:16:56.903169486 +0000 UTC Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.913975 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:08 crc kubenswrapper[4804]: E0128 11:23:08.914100 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914138 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914230 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:08 crc kubenswrapper[4804]: I0128 11:23:08.914253 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:08Z","lastTransitionTime":"2026-01-28T11:23:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016618 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016669 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016684 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.016727 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119469 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119541 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119558 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.119574 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.223826 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.223954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.224020 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.224057 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.224120 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326797 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326837 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326861 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.326873 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430714 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430758 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.430776 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533795 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533854 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533874 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.533899 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636767 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.636815 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739695 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739760 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739770 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739790 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.739803 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.842948 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.842996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.843008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.843024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.843035 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.904728 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 12:22:43.861605291 +0000 UTC Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.914184 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:09 crc kubenswrapper[4804]: E0128 11:23:09.914321 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.914528 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:09 crc kubenswrapper[4804]: E0128 11:23:09.914574 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.914670 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:09 crc kubenswrapper[4804]: E0128 11:23:09.914719 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:09 crc kubenswrapper[4804]: I0128 11:23:09.946092 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:09Z","lastTransitionTime":"2026-01-28T11:23:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048364 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048411 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048421 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048440 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.048453 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151704 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151766 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.151799 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255468 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255532 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255553 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.255565 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358051 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358095 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358106 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358120 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.358151 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461715 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461724 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.461752 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564760 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.564786 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.671825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.671968 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.671993 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.672024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.672058 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774741 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.774774 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878230 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878263 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.878276 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.905770 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 00:27:19.373393545 +0000 UTC Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.914477 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:10 crc kubenswrapper[4804]: E0128 11:23:10.914711 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.929707 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:10 crc kubenswrapper[4804]: E0128 11:23:10.929997 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:10 crc kubenswrapper[4804]: E0128 11:23:10.930140 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:23:42.930105995 +0000 UTC m=+98.724986009 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980807 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980855 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980866 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980905 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:10 crc kubenswrapper[4804]: I0128 11:23:10.980919 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:10Z","lastTransitionTime":"2026-01-28T11:23:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083047 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083099 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083115 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083135 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.083149 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185833 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185871 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.185904 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289654 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289670 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.289708 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392795 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392864 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.392978 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495926 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495953 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.495971 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599150 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599238 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599264 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599299 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.599323 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702203 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702216 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702231 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.702242 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.727950 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728013 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728026 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728050 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.728067 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.742298 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747613 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747665 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.747710 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.760682 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768428 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768566 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768597 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.768669 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.785495 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790302 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790341 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790353 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.790380 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.803459 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807593 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807648 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807669 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.807684 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.820480 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:11Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:11Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.820640 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822496 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822509 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822530 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.822544 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.906670 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 19:39:46.303024347 +0000 UTC Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.914132 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.914182 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.914285 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.914436 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.914563 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:11 crc kubenswrapper[4804]: E0128 11:23:11.914700 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925547 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925627 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:11 crc kubenswrapper[4804]: I0128 11:23:11.925643 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:11Z","lastTransitionTime":"2026-01-28T11:23:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028288 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.028340 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130709 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130765 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.130789 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.233105 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335487 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335519 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335530 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335546 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.335556 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438333 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438364 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438372 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.438394 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541113 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541149 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541157 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541171 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.541181 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644599 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644662 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644684 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.644728 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748321 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748362 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748370 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.748395 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850676 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.850689 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.907009 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 11:31:13.517359468 +0000 UTC Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.914331 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:12 crc kubenswrapper[4804]: E0128 11:23:12.914493 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.952694 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.952975 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.953091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.953174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:12 crc kubenswrapper[4804]: I0128 11:23:12.953253 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:12Z","lastTransitionTime":"2026-01-28T11:23:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.055780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056261 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056358 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.056440 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.158926 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.158971 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.158984 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.159002 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.159014 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262062 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262072 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.262099 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365397 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.365861 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469265 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469374 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.469394 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572573 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572586 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572609 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.572624 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675251 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675303 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675320 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.675331 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777829 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777871 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777899 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.777909 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880078 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880143 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880157 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.880168 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.907169 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 11:07:09.637335046 +0000 UTC Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.914598 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.914642 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.914693 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:13 crc kubenswrapper[4804]: E0128 11:23:13.914799 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:13 crc kubenswrapper[4804]: E0128 11:23:13.914922 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:13 crc kubenswrapper[4804]: E0128 11:23:13.915011 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983348 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:13 crc kubenswrapper[4804]: I0128 11:23:13.983379 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:13Z","lastTransitionTime":"2026-01-28T11:23:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085382 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085394 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.085445 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187514 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187540 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.187557 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.289731 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290021 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.290235 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.302661 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/0.log" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.302779 4804 generic.go:334] "Generic (PLEG): container finished" podID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" containerID="938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7" exitCode=1 Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.302856 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerDied","Data":"938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.303363 4804 scope.go:117] "RemoveContainer" containerID="938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.315421 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.328181 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.339360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.354154 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.375043 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.386957 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395109 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395152 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395165 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395183 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.395196 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.412063 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.428959 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.446347 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.460269 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.476785 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.490145 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497739 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497786 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.497831 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.505446 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.523389 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f781
4a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T
11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.544622 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.562102 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.579788 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.597426 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc 
kubenswrapper[4804]: I0128 11:23:14.600367 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600410 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600440 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.600454 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706673 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706720 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706736 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.706765 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.808853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809241 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.809250 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.908237 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 12:54:31.926799745 +0000 UTC Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912624 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912695 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912710 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.912745 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:14Z","lastTransitionTime":"2026-01-28T11:23:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.915204 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:14 crc kubenswrapper[4804]: E0128 11:23:14.915355 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.932266 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\
\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/
crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.946795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.961537 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.977973 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:14 crc kubenswrapper[4804]: I0128 11:23:14.991774 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:14Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc 
kubenswrapper[4804]: I0128 11:23:15.005850 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17
ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016191 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016238 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016252 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016282 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.016294 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.018502 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.028918 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/opens
hift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.041801 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-
additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df
312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Compl
eted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\
"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.059707 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d20994829
19d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cd
d47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI012
8 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.072596 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.084838 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.107787 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118319 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118329 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118344 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.118373 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.120680 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\
\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.1
26.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.133937 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.146745 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.163211 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.176722 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220444 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220498 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220510 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220528 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.220542 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.307501 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/0.log" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.307562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323508 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323561 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323572 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323589 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.323601 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.326170 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.337592 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.351255 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.364222 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.376066 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.390957 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.403337 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/ku
bernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.414413 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426587 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426819 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426844 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426857 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.426866 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.447047 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.457851 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.470098 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.482216 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.494193 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.505114 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.516622 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.528415 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529149 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529190 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529210 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.529222 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.537613 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:15Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:15 crc 
kubenswrapper[4804]: I0128 11:23:15.631503 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631568 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.631580 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734380 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734429 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734443 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734463 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.734475 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836711 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836742 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836754 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836771 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.836783 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.909210 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 17:15:57.251324211 +0000 UTC Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.914281 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.914306 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.914281 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:15 crc kubenswrapper[4804]: E0128 11:23:15.914399 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:15 crc kubenswrapper[4804]: E0128 11:23:15.914463 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:15 crc kubenswrapper[4804]: E0128 11:23:15.914535 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939176 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939226 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939238 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939258 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:15 crc kubenswrapper[4804]: I0128 11:23:15.939271 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:15Z","lastTransitionTime":"2026-01-28T11:23:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041390 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041433 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041473 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.041489 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143543 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143584 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143596 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143611 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.143623 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246378 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246441 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.246452 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350056 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.350150 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452911 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452961 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452972 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.452994 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.453006 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556324 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556396 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556414 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.556859 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661894 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661945 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661979 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.661992 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764398 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764438 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764467 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.764479 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866655 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866687 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866695 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866707 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.866716 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.910347 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 03:52:23.951560439 +0000 UTC Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.914918 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:16 crc kubenswrapper[4804]: E0128 11:23:16.915175 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969284 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:16 crc kubenswrapper[4804]: I0128 11:23:16.969305 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:16Z","lastTransitionTime":"2026-01-28T11:23:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071578 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071647 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.071655 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173375 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173428 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173445 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.173456 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276275 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276323 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.276348 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379263 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379297 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379306 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379320 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.379329 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481831 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481876 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481905 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481925 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.481936 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.584977 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585068 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.585107 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686847 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686903 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686916 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.686946 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789810 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789835 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789873 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.789943 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892345 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892402 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892423 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892449 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.892467 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.910917 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 10:19:50.103932987 +0000 UTC Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.914299 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.914324 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.914312 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:17 crc kubenswrapper[4804]: E0128 11:23:17.914477 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:17 crc kubenswrapper[4804]: E0128 11:23:17.914676 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:17 crc kubenswrapper[4804]: E0128 11:23:17.914731 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.994920 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.994974 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.994996 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.995022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:17 crc kubenswrapper[4804]: I0128 11:23:17.995041 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:17Z","lastTransitionTime":"2026-01-28T11:23:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097841 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097867 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.097877 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200530 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200576 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200588 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.200614 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.303957 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304251 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.304285 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407075 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407126 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407140 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407158 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.407170 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510174 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510211 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510221 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510235 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.510245 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612342 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612408 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612452 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.612469 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.714959 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715018 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715030 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715043 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.715053 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817259 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817314 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817352 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.817367 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.912015 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 10:49:27.772322745 +0000 UTC Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.914243 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:18 crc kubenswrapper[4804]: E0128 11:23:18.914569 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920327 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920447 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920466 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:18 crc kubenswrapper[4804]: I0128 11:23:18.920477 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:18Z","lastTransitionTime":"2026-01-28T11:23:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.022954 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.022995 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.023007 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.023027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.023040 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126959 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126969 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.126996 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229425 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229476 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229490 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229508 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.229520 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332013 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332059 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332095 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332113 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.332125 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434689 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434725 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434736 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434751 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.434761 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537420 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537460 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537490 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.537501 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639484 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639533 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639550 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.639561 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742422 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742477 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742494 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742518 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.742535 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844679 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844721 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844732 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.844759 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.913008 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 07:28:57.52845236 +0000 UTC Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.914488 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:19 crc kubenswrapper[4804]: E0128 11:23:19.914697 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.914800 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.914918 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:19 crc kubenswrapper[4804]: E0128 11:23:19.915108 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:19 crc kubenswrapper[4804]: E0128 11:23:19.915318 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.916571 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947780 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947842 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947859 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:19 crc kubenswrapper[4804]: I0128 11:23:19.947924 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:19Z","lastTransitionTime":"2026-01-28T11:23:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.051962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052026 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052044 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052073 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.052090 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156101 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156147 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156159 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.156193 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259549 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259578 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259587 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259600 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.259612 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362175 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362185 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362201 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.362213 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470726 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470746 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470777 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.470800 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574587 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574651 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574670 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.574720 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677815 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677878 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677931 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677967 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.677985 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780392 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780426 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780450 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.780460 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.882951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.882990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.882999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.883014 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.883025 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.913638 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 07:30:42.075548647 +0000 UTC Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.914988 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:20 crc kubenswrapper[4804]: E0128 11:23:20.915120 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:20 crc kubenswrapper[4804]: I0128 11:23:20.985150 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:20Z","lastTransitionTime":"2026-01-28T11:23:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087816 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087845 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.087874 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190614 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190660 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190670 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190685 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.190696 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293289 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293338 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293349 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293367 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.293379 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.329002 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.329578 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/2.log" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.332409 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" exitCode=1 Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.332446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.332480 4804 scope.go:117] "RemoveContainer" containerID="5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.333587 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.333850 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.348972 4804 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f
4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.362575 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.374217 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.384741 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.393900 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395499 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395510 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395573 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.395584 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.403481 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.415214 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-sy
ncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.428157 4804 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.441360 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\
\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.453987 4804 status_manager.go:875] "Failed to update status for 
pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"c
ontainerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:2
8Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\
\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.470837 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5cffc3ddba20015bd404930d01ac7730fe676ab7c1adb779da1012687f00f177\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:22:50Z\\\",\\\"message\\\":\\\"gs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0128 11:22:50.819858 6458 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0128 11:22:50.820483 6458 reflector.go:311] Stopping 
reflector *v1.Namespace (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0128 11:22:50.821019 6458 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0128 11:22:50.821054 6458 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0128 11:22:50.821060 6458 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0128 11:22:50.821120 6458 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0128 11:22:50.821153 6458 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0128 11:22:50.821163 6458 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0128 11:22:50.821178 6458 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0128 11:22:50.821203 6458 factory.go:656] Stopping watch factory\\\\nI0128 11:22:50.821213 6458 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0128 11:22:50.821225 6458 handler.go:208] Removed *v1.Node event handler 7\\\\nI0128 11:22:50.821236 6458 handler.go:208] Removed *v1.Node ev\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:50Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:21Z\\\",\\\"message\\\":\\\"ift for endpointslice openshift-authentication/oauth-openshift-7f7vm as it is not a known egress service\\\\nI0128 11:23:21.186777 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-7q466 as it is not a known egress service\\\\nI0128 11:23:21.186788 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-v8bv2 as it is not a known egress service\\\\nI0128 11:23:21.186793 6864 
egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-sh7kc as it is not a known egress service\\\\nI0128 11:23:21.186799 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-g8487 as it is not a known egress service\\\\nI0128 11:23:21.186832 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI0128 11:23:21.186731 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0128 11:23:21.186973 6864 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:23:20Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/netwo
rks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\"
:true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.479934 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.489727 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497722 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497787 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.497799 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.509123 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.518990 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.529471 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.540547 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.552242 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:21Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600126 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600169 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600177 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600192 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.600203 4804 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.701990 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702024 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702032 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702046 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.702056 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804752 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804807 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804849 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.804869 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908000 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908108 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.908168 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:21Z","lastTransitionTime":"2026-01-28T11:23:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.913896 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-06 23:31:38.777098005 +0000 UTC Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.913989 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.914000 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:21 crc kubenswrapper[4804]: I0128 11:23:21.914052 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.914125 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.914201 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:21 crc kubenswrapper[4804]: E0128 11:23:21.914295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008855 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008915 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.008929 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.026730 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030823 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030853 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030897 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.030911 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.043100 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047798 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047852 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.047865 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.062770 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066316 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066357 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066368 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066386 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.066398 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.079173 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082952 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082964 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082981 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.082993 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.096194 4804 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"7f97964b-a8a7-425a-af7c-d85a989338ac\\\",\\\"systemUUID\\\":\\\"c53b59f0-ae5a-4211-87ed-9529d3bdae0b\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.096355 4804 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098073 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098117 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098134 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098154 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.098166 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201483 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201542 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201564 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.201579 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304164 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304214 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304234 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.304249 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.338444 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406699 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406741 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406750 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406764 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.406774 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509179 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509217 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509227 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509243 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.509254 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612323 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612400 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612448 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.612462 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714644 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714653 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714667 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.714677 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.817960 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.817998 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.818008 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.818027 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.818040 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.914499 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-09 17:41:10.88088094 +0000 UTC Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.914660 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.914807 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920763 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920774 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920791 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.920803 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:22Z","lastTransitionTime":"2026-01-28T11:23:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.925564 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.927697 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:22 crc kubenswrapper[4804]: E0128 11:23:22.927986 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.941805 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.961723 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.977586 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"
mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/
openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:22 crc kubenswrapper[4804]: I0128 11:23:22.998224 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:23:22Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.013339 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023430 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 
11:23:23.023478 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023494 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023516 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.023532 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.033741 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.057620 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:21Z\\\",\\\"message\\\":\\\"ift for endpointslice openshift-authentication/oauth-openshift-7f7vm as it is not a known egress service\\\\nI0128 11:23:21.186777 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-7q466 as it is not a known egress service\\\\nI0128 11:23:21.186788 
6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-v8bv2 as it is not a known egress service\\\\nI0128 11:23:21.186793 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-sh7kc as it is not a known egress service\\\\nI0128 11:23:21.186799 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-g8487 as it is not a known egress service\\\\nI0128 11:23:21.186832 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI0128 11:23:21.186731 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0128 11:23:21.186973 6864 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:23:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.076163 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.087537 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/opensh
ift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal 
error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.109234 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/e
tc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cr
i-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7
c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.119795 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125747 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125788 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125802 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125822 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.125837 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.131647 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.141799 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.153187 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.163182 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.175371 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\",\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.187645 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.199351 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:23Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228052 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228091 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228119 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.228132 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.329908 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330017 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330031 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.330058 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432361 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432407 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432417 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432433 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.432444 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.534956 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.534997 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.535007 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.535022 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.535032 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.637965 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638004 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638015 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.638074 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741038 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741069 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741080 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.741114 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844524 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844579 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844595 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.844633 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914372 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:23 crc kubenswrapper[4804]: E0128 11:23:23.914561 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914629 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914707 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 01:15:16.198951024 +0000 UTC Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.914686 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:23 crc kubenswrapper[4804]: E0128 11:23:23.914934 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:23 crc kubenswrapper[4804]: E0128 11:23:23.915029 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948312 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948369 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948380 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948403 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:23 crc kubenswrapper[4804]: I0128 11:23:23.948421 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:23Z","lastTransitionTime":"2026-01-28T11:23:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.051957 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052009 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052018 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052032 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.052043 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.154976 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155063 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.155107 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257038 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257087 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257116 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.257128 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360697 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360717 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360744 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.360772 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464679 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464740 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464753 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.464793 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568507 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568581 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568605 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568638 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.568663 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671734 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671792 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671810 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671835 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.671853 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775486 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775522 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775533 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.775564 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879601 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879622 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879650 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.879670 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.914850 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.915871 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 10:39:02.677131783 +0000 UTC Jan 28 11:23:24 crc kubenswrapper[4804]: E0128 11:23:24.916038 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.938423 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6f080b6185e9c2c25359963bfd364f40deb62c67b06026ddad337d4c0abf5bd4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://52ac3e3ff0b6be34d1f8f4859c53169474a4efcbd60fa5f1e13e9afe7feb5dca\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.950960 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-r6hvc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8e616d20-36f4-4d59-9ce0-d2e18fd63902\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e39c804720aace85415aea76b65a6ee6bbd22998abc1958ebe8a276ac626f4fb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-9kmbz\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:24Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-r6hvc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.963768 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34e3d03d-371f-46d2-946a-6156c9570604\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:37Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://0f50f5672fb396b3bb1a2fc74842025610355e1471c3d52d664664d9afbb9a3b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d7
93426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://022ffd423e414eefcccabff54b0c0e9c4e5395a6dd17bb74f8b632d137620b59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-57lrq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:37Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-5jdhj\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.981982 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.982076 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:24Z","lastTransitionTime":"2026-01-28T11:23:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:24 crc kubenswrapper[4804]: I0128 11:23:24.987172 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6576ee4f-4eb5-4b56-9a21-1ad48893d524\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e337aafc6e1207057978d1572b82aab2b18aba3d87ace8a93fe751866a8374e8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://135f72578ad7aac42aa37e100fd2b9a6ad8884926be9846bc6e36c3587c1edb0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2a0e85b03a2451dae7deef26ad6c2d12037f638e7733a2dd9dd6265fdaee418\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://fa8ba532efafe4192bfac1db98cf83a7d093268c42e601e6243b0b3ee64883ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f7fb87e24784c9ecf8bcf9b04663b203762a2d43b2b407da5de7152d07056d01\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://46e3c457d5249d96d84050f0d0937f39d654738c2c13a86eaeb1868e9943acef\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://458a0e0a8650c1cf0471bc5e0c848e89d4818a32591bb70a652b2ccf30842a2c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://58a6f57874394a8665d7280ad8bbe29f89b39937f8d7aa2ea45a43419c3cfc7c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
026-01-28T11:22:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:24Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.004141 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"83821b06-1780-492f-bc74-4dbb3369b083\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:00Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e43bf8b668ba9a105df4e870222934240db37ba251c6a7a0edf2b1906f3ff986\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a2193a343b06edebc38cfa6baf9f72fe1872007ccd12e45f66b7b1bf514e3461\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a60bccbe754db806af285441aba84e70a6ab1207b062fc7ca63c03bc764cf659\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://7215d2221171871fa4d4015eb6bb62a78dcb8987c8a236667588ea85ebb565eb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.023658 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4d2e69c4b6efb5305dca5de79c676548455bb31fc13f0d74cd1789fbb776289c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:24Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.043329 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.065314 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3ed363e0-3913-4e5f-93a4-be30983b2c7d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-28T11:22:18Z\\\"
,\\\"message\\\":\\\"W0128 11:22:08.074273 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI0128 11:22:08.075057 1 crypto.go:601] Generating new CA for check-endpoints-signer@1769599328 cert, and key in /tmp/serving-cert-510811229/serving-signer.crt, /tmp/serving-cert-510811229/serving-signer.key\\\\nI0128 11:22:08.540602 1 observer_polling.go:159] Starting file observer\\\\nW0128 11:22:08.543232 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI0128 11:22:08.543470 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0128 11:22:08.544967 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-510811229/tls.crt::/tmp/serving-cert-510811229/tls.key\\\\\\\"\\\\nF0128 11:22:18.686476 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:08Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:07Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085098 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085330 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085385 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085399 4804 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085423 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.085440 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.100636 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:23Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.116436 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-lqqmt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"735b7edc-6f8b-4f5f-a9ca-11964dd78266\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:23:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:14Z\\\",\\\"message\\\":\\\"2026-01-28T11:22:28+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d\\\\n2026-01-28T11:22:28+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_445d31a9-24c2-48bf-b7dc-6fabb016ac5d to /host/opt/cni/bin/\\\\n2026-01-28T11:22:29Z [verbose] multus-daemon started\\\\n2026-01-28T11:22:29Z [verbose] 
Readiness Indicator file check\\\\n2026-01-28T11:23:14Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:23:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-2vncf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-lqqmt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.127579 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"03844e8b-8d66-4cd7-aa19-51caa1407918\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:38Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zxhvp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:38Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-bgqd8\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.154271 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"686039c6-ae16-45ac-bb9f-4c39d57d6c80\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-28T11:23:21Z\\\",\\\"message\\\":\\\"ift for endpointslice openshift-authentication/oauth-openshift-7f7vm as it is not a known egress service\\\\nI0128 11:23:21.186777 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console-operator/metrics for endpointslice openshift-console-operator/metrics-7q466 as it is not a known egress service\\\\nI0128 11:23:21.186788 
6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-console/console for endpointslice openshift-console/console-v8bv2 as it is not a known egress service\\\\nI0128 11:23:21.186793 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-dns-operator/metrics for endpointslice openshift-dns-operator/metrics-sh7kc as it is not a known egress service\\\\nI0128 11:23:21.186799 6864 egressservice_zone_endpointslice.go:80] Ignoring updating openshift-machine-config-operator/machine-config-operator for endpointslice openshift-machine-config-operator/machine-config-operator-g8487 as it is not a known egress service\\\\nI0128 11:23:21.186832 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *v1.Namespace\\\\nI0128 11:23:21.186731 6864 obj_retry.go:439] Stop channel got triggered: will stop retrying failed objects of type *factory.egressIPNamespace\\\\nI0128 11:23:21.186973 6864 nad_controller.go:166] [zone-nad-controller NAD controller]: \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-28T11:23:20Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://048fd103b2f4f21c4f
a63cc5dcb2a671ea07672acb6e037999082deada6cea03\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-55hnp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-24gvs\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.164176 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-v88kz" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"28d27942-1d0e-4433-a349-e1a404557705\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc5ec909351c29b0e7564ed056e09c5b56113e8590b8ce5676c935b68392778c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hv4wb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:31Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-v88kz\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.177730 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6e26ff64-b035-46cc-a97d-0e515afaa80e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9e793e859196b164d1a12e256d333048c57cccb7461eee62fbbf26e1e230a95b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee8
8051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://180e54a93b89623c6adb50939daecb2756ec11fcf25ff86c76df635eb3342f1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\
\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9910a55667e8cbd9e5e4e27108e6787eb996aa861aca20b5875872da12e682d7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:05Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188144 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188198 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.188226 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.190497 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://1d1c5dcc938871acaf8e38c3a69eb224264f6b999f48ab5f201c1fd16c6a79ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.204455 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d901be89-84b0-4249-9548-2e626a112a4c\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:25Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d99437d94879a7d29074e552f490e3b6ec8ed392923ef705a6f4bda30332253c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-r
bac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-np5cs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-slkk8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.219599 4804 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"12825f11-ad6e-4db0-87b3-a619c0521c56\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:27Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-28T11:22:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6082c6a74525104023cf9a02f533e091682f4987c807d10c4d982b2c44d8a96a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-28T11:22:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://8230303b87a9fbf0ba8e9a69721161839aedc4d2557ca60248eab7629e8de6cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:26Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:26Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a23b3355a90262ea89d185f526045517b21a147a23bdba81271e1c90a308c489\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:28Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:27Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b7b79e33c158affbd07e711b172cb453fd89b96548f01a0c133956fd7b196326\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:29Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:29Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://00f7b
bebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://00f7bbebe6c9138c4d47494c23b5ec271ac0450b6ca9cdc39127e18db4c420b0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:30Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:30Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb6e593f71f197d7c51daeae72aff39c0e03fae0743244b268370fab914ed3a8\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:31Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:31Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f18bc3b6914d5be68cdd1c8decf89df6fca211d9e3f7392c5feea6459adfe108\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-28T11:22:32Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-28T11:22:32Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-7bdbf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-28T11:22:25Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rm9ff\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-28T11:23:25Z is after 2025-08-24T17:21:41Z" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290636 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290686 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290698 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290716 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.290729 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393558 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393604 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393617 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393633 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.393645 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495443 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495491 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495501 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495517 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.495526 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597662 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597721 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597730 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597747 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.597756 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700037 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700154 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700184 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.700207 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.802924 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.802992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.803007 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.803038 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.803054 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905902 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905948 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905962 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905981 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.905992 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:25Z","lastTransitionTime":"2026-01-28T11:23:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.914110 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.914249 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:25 crc kubenswrapper[4804]: E0128 11:23:25.914316 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.914405 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:25 crc kubenswrapper[4804]: E0128 11:23:25.914556 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:25 crc kubenswrapper[4804]: E0128 11:23:25.914687 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:25 crc kubenswrapper[4804]: I0128 11:23:25.916849 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 22:49:18.671799004 +0000 UTC Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008334 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008400 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008415 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008435 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.008448 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111025 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111067 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111074 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111090 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.111100 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213829 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213848 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213873 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.213935 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316431 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316486 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.316539 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419233 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419297 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419313 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.419343 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522162 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522202 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522211 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522225 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.522234 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624204 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624248 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624256 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624303 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.624315 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726730 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726777 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726805 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.726817 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828514 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828551 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828560 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828575 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.828585 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.914227 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:26 crc kubenswrapper[4804]: E0128 11:23:26.914376 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.917553 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 23:43:53.820232734 +0000 UTC Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931240 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931283 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931294 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:26 crc kubenswrapper[4804]: I0128 11:23:26.931314 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:26Z","lastTransitionTime":"2026-01-28T11:23:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033290 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033336 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033350 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.033361 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135472 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135504 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135526 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.135536 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238396 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238419 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.238431 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341130 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341168 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341180 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341195 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.341206 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.443940 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.443999 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.444019 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.444042 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.444059 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546258 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546300 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546312 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546331 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.546344 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648585 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648643 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648685 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.648702 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751072 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751123 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751136 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751152 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.751164 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.803014 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.803117 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.803199 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.80317443 +0000 UTC m=+147.598054424 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.803204 4804 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.803256 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.803244092 +0000 UTC m=+147.598124086 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853789 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853851 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853868 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853937 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.853956 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.903979 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.904040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.904082 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904176 4804 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904181 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904191 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904211 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904222 4804 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904230 4804 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904237 4804 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904238 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.904222923 +0000 UTC m=+147.699102917 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904297 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.904280035 +0000 UTC m=+147.699160039 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.904330 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.904318436 +0000 UTC m=+147.699198440 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.914062 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.914106 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.914079 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.914204 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.914443 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:23:27 crc kubenswrapper[4804]: E0128 11:23:27.914534 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.918153 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 23:55:17.651089784 +0000 UTC
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.927092 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"]
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958163 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958196 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958219 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:27 crc kubenswrapper[4804]: I0128 11:23:27.958232 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:27Z","lastTransitionTime":"2026-01-28T11:23:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060677 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060742 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060755 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060775 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.060789 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163181 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163248 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163292 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.163311 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266649 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266706 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266724 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266754 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.266778 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370305 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370613 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370621 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370634 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.370644 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.475958 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476039 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476066 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476089 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.476107 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579862 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579927 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579966 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.579977 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683357 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683418 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683432 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683465 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.683486 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786645 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786656 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.786691 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889682 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889705 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.889720 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.914539 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:23:28 crc kubenswrapper[4804]: E0128 11:23:28.914870 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.918250 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 16:55:50.393183484 +0000 UTC
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992906 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992923 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992943 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:28 crc kubenswrapper[4804]: I0128 11:23:28.992955 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:28Z","lastTransitionTime":"2026-01-28T11:23:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095512 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095553 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095562 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095577 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.095589 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197326 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197360 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197368 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197382 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.197391 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.299992 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300035 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300045 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300058 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.300069 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402315 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402354 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402366 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402384 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.402397 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504481 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504518 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504529 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504544 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.504554 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607029 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607065 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607093 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.607106 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709218 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709266 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709277 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709295 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.709309 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812023 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812087 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812098 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812116 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.812129 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.913988 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.914024 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.913988 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:23:29 crc kubenswrapper[4804]: E0128 11:23:29.914180 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:23:29 crc kubenswrapper[4804]: E0128 11:23:29.914268 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:23:29 crc kubenswrapper[4804]: E0128 11:23:29.914441 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915079 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915115 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915129 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915146 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.915159 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:29Z","lastTransitionTime":"2026-01-28T11:23:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:29 crc kubenswrapper[4804]: I0128 11:23:29.918549 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 14:37:43.572632266 +0000 UTC
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.017920 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.017980 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.018072 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.018103 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.018207 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121124 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121199 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121262 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.121288 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224808 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224918 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224934 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224951 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.224965 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327474 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327523 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327539 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327560 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.327576 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429785 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429825 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429834 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.429863 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531444 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531479 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531488 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531502 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.531511 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633814 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633875 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633901 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633917 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.633931 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735646 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735659 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735672 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.735683 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/.
Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838516 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838610 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838641 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.838659 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.914188 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:30 crc kubenswrapper[4804]: E0128 11:23:30.914346 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.919109 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 16:40:35.217958687 +0000 UTC Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942107 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942146 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942156 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942172 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:30 crc kubenswrapper[4804]: I0128 11:23:30.942182 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:30Z","lastTransitionTime":"2026-01-28T11:23:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044571 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044607 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044615 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044629 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.044638 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148104 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148157 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148169 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148187 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.148199 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251543 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251583 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251592 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251608 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.251620 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355077 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355158 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355178 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355205 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.355225 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.460869 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.461691 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.461738 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.461985 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.462048 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565060 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565126 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565155 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565185 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.565210 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667704 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667772 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667783 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667800 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.667811 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770759 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770803 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770829 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770850 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.770867 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.873986 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874034 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874049 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874070 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.874087 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.914582 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:31 crc kubenswrapper[4804]: E0128 11:23:31.914761 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.915043 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:31 crc kubenswrapper[4804]: E0128 11:23:31.915112 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.915212 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:31 crc kubenswrapper[4804]: E0128 11:23:31.915393 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.919462 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 03:23:54.513521448 +0000 UTC Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978088 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978361 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978606 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.978806 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:31 crc kubenswrapper[4804]: I0128 11:23:31.979013 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:31Z","lastTransitionTime":"2026-01-28T11:23:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.082590 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083279 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083495 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083728 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.083941 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187678 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187749 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187770 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187799 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.187822 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.291048 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292061 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292268 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292434 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.292562 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395166 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395215 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395228 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395247 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.395262 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400778 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400837 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400849 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400865 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.400889 4804 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-28T11:23:32Z","lastTransitionTime":"2026-01-28T11:23:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.459117 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj"] Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.461578 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.464652 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.464791 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.464832 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.465182 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.516444 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=69.5164198 podStartE2EDuration="1m9.5164198s" podCreationTimestamp="2026-01-28 11:22:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.503270054 +0000 UTC m=+88.298150048" watchObservedRunningTime="2026-01-28 11:23:32.5164198 +0000 UTC m=+88.311299804" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.517202 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=32.517192494 podStartE2EDuration="32.517192494s" podCreationTimestamp="2026-01-28 11:23:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.516902665 +0000 UTC m=+88.311782649" watchObservedRunningTime="2026-01-28 11:23:32.517192494 +0000 UTC m=+88.312072498" Jan 28 11:23:32 crc 
kubenswrapper[4804]: I0128 11:23:32.550018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eced286-7c46-4520-99c1-b8b7225d9c72-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550081 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550114 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eced286-7c46-4520-99c1-b8b7225d9c72-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.550729 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4eced286-7c46-4520-99c1-b8b7225d9c72-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.598702 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-r6hvc" podStartSLOduration=68.598625361 podStartE2EDuration="1m8.598625361s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.579582933 +0000 UTC m=+88.374462927" watchObservedRunningTime="2026-01-28 11:23:32.598625361 +0000 UTC m=+88.393505355" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.616711 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-5jdhj" podStartSLOduration=66.616676259 podStartE2EDuration="1m6.616676259s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.599632592 +0000 UTC m=+88.394512596" watchObservedRunningTime="2026-01-28 11:23:32.616676259 +0000 UTC m=+88.411556243" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.631574 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=68.631547598 podStartE2EDuration="1m8.631547598s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.617750482 +0000 UTC m=+88.412630466" watchObservedRunningTime="2026-01-28 11:23:32.631547598 +0000 UTC m=+88.426427582" Jan 28 11:23:32 
crc kubenswrapper[4804]: I0128 11:23:32.652109 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eced286-7c46-4520-99c1-b8b7225d9c72-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652176 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eced286-7c46-4520-99c1-b8b7225d9c72-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652231 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eced286-7c46-4520-99c1-b8b7225d9c72-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652256 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652284 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-ssl-certs\") pod 
\"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652362 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.652677 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/4eced286-7c46-4520-99c1-b8b7225d9c72-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.653332 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4eced286-7c46-4520-99c1-b8b7225d9c72-service-ca\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.673908 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4eced286-7c46-4520-99c1-b8b7225d9c72-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.676683 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4eced286-7c46-4520-99c1-b8b7225d9c72-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-n7chj\" (UID: \"4eced286-7c46-4520-99c1-b8b7225d9c72\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.689098 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=5.689071476 podStartE2EDuration="5.689071476s" podCreationTimestamp="2026-01-28 11:23:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.665790456 +0000 UTC m=+88.460670430" watchObservedRunningTime="2026-01-28 11:23:32.689071476 +0000 UTC m=+88.483951460" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.689638 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-lqqmt" podStartSLOduration=67.689633333 podStartE2EDuration="1m7.689633333s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.689071956 +0000 UTC m=+88.483951960" watchObservedRunningTime="2026-01-28 11:23:32.689633333 +0000 UTC m=+88.484513317" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.744232 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=68.74421632 podStartE2EDuration="1m8.74421632s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.725259084 +0000 UTC m=+88.520139058" watchObservedRunningTime="2026-01-28 11:23:32.74421632 
+0000 UTC m=+88.539096304" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.758268 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podStartSLOduration=67.758244743 podStartE2EDuration="1m7.758244743s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.757787079 +0000 UTC m=+88.552667073" watchObservedRunningTime="2026-01-28 11:23:32.758244743 +0000 UTC m=+88.553124727" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.777417 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.780342 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-rm9ff" podStartSLOduration=67.780329366 podStartE2EDuration="1m7.780329366s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.780167491 +0000 UTC m=+88.575047475" watchObservedRunningTime="2026-01-28 11:23:32.780329366 +0000 UTC m=+88.575209350" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.825164 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-v88kz" podStartSLOduration=66.825140161 podStartE2EDuration="1m6.825140161s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:32.824250363 +0000 UTC m=+88.619130347" watchObservedRunningTime="2026-01-28 11:23:32.825140161 +0000 UTC m=+88.620020145" Jan 28 11:23:32 
crc kubenswrapper[4804]: I0128 11:23:32.914213 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:32 crc kubenswrapper[4804]: E0128 11:23:32.914438 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.920159 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 20:05:17.292351006 +0000 UTC Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.920247 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 28 11:23:32 crc kubenswrapper[4804]: I0128 11:23:32.929937 4804 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.379324 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" event={"ID":"4eced286-7c46-4520-99c1-b8b7225d9c72","Type":"ContainerStarted","Data":"9be897b264fbbca8adb89640387e9e927dc46a4f873e9762d31504daa3efc583"} Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.379373 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" event={"ID":"4eced286-7c46-4520-99c1-b8b7225d9c72","Type":"ContainerStarted","Data":"4692fd01b10e57950b4b3dc2f5edd63e8e7f02bf76ca3f18c1bdfe29a3880520"} Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.914538 4804 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.914554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:33 crc kubenswrapper[4804]: I0128 11:23:33.914555 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:33 crc kubenswrapper[4804]: E0128 11:23:33.914858 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:33 crc kubenswrapper[4804]: E0128 11:23:33.914947 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:33 crc kubenswrapper[4804]: E0128 11:23:33.915070 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:34 crc kubenswrapper[4804]: I0128 11:23:34.914601 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:34 crc kubenswrapper[4804]: E0128 11:23:34.915593 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:35 crc kubenswrapper[4804]: I0128 11:23:35.913927 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:35 crc kubenswrapper[4804]: I0128 11:23:35.914245 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:35 crc kubenswrapper[4804]: E0128 11:23:35.914235 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:35 crc kubenswrapper[4804]: I0128 11:23:35.914338 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:35 crc kubenswrapper[4804]: E0128 11:23:35.914639 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:35 crc kubenswrapper[4804]: E0128 11:23:35.914817 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:36 crc kubenswrapper[4804]: I0128 11:23:36.914366 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:36 crc kubenswrapper[4804]: E0128 11:23:36.914558 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:36 crc kubenswrapper[4804]: I0128 11:23:36.915325 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:36 crc kubenswrapper[4804]: E0128 11:23:36.915470 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:37 crc kubenswrapper[4804]: I0128 11:23:37.914760 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:37 crc kubenswrapper[4804]: I0128 11:23:37.914907 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:37 crc kubenswrapper[4804]: E0128 11:23:37.914992 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:37 crc kubenswrapper[4804]: I0128 11:23:37.915113 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:37 crc kubenswrapper[4804]: E0128 11:23:37.915264 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:37 crc kubenswrapper[4804]: E0128 11:23:37.915496 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:38 crc kubenswrapper[4804]: I0128 11:23:38.915000 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:38 crc kubenswrapper[4804]: E0128 11:23:38.915297 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:39 crc kubenswrapper[4804]: I0128 11:23:39.914863 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:39 crc kubenswrapper[4804]: I0128 11:23:39.914904 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:39 crc kubenswrapper[4804]: I0128 11:23:39.915047 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:39 crc kubenswrapper[4804]: E0128 11:23:39.915096 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:39 crc kubenswrapper[4804]: E0128 11:23:39.915086 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:39 crc kubenswrapper[4804]: E0128 11:23:39.915543 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:40 crc kubenswrapper[4804]: I0128 11:23:40.914573 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:40 crc kubenswrapper[4804]: E0128 11:23:40.914866 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:41 crc kubenswrapper[4804]: I0128 11:23:41.914770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:41 crc kubenswrapper[4804]: I0128 11:23:41.914770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:41 crc kubenswrapper[4804]: E0128 11:23:41.914960 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:41 crc kubenswrapper[4804]: E0128 11:23:41.915006 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:41 crc kubenswrapper[4804]: I0128 11:23:41.914815 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:41 crc kubenswrapper[4804]: E0128 11:23:41.915066 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:42 crc kubenswrapper[4804]: I0128 11:23:42.914272 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:42 crc kubenswrapper[4804]: E0128 11:23:42.914528 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:42 crc kubenswrapper[4804]: I0128 11:23:42.962647 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:42 crc kubenswrapper[4804]: E0128 11:23:42.962865 4804 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:42 crc kubenswrapper[4804]: E0128 11:23:42.962991 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs podName:03844e8b-8d66-4cd7-aa19-51caa1407918 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:46.962965572 +0000 UTC m=+162.757845596 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs") pod "network-metrics-daemon-bgqd8" (UID: "03844e8b-8d66-4cd7-aa19-51caa1407918") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 28 11:23:43 crc kubenswrapper[4804]: I0128 11:23:43.914286 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:43 crc kubenswrapper[4804]: I0128 11:23:43.914324 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:43 crc kubenswrapper[4804]: E0128 11:23:43.914623 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:43 crc kubenswrapper[4804]: I0128 11:23:43.914719 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:43 crc kubenswrapper[4804]: E0128 11:23:43.914862 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:43 crc kubenswrapper[4804]: E0128 11:23:43.915073 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:44 crc kubenswrapper[4804]: I0128 11:23:44.914141 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:44 crc kubenswrapper[4804]: E0128 11:23:44.916181 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:45 crc kubenswrapper[4804]: I0128 11:23:45.914156 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:45 crc kubenswrapper[4804]: I0128 11:23:45.914220 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:45 crc kubenswrapper[4804]: E0128 11:23:45.914295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:45 crc kubenswrapper[4804]: I0128 11:23:45.914231 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:45 crc kubenswrapper[4804]: E0128 11:23:45.914360 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:45 crc kubenswrapper[4804]: E0128 11:23:45.914484 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:46 crc kubenswrapper[4804]: I0128 11:23:46.914678 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:46 crc kubenswrapper[4804]: E0128 11:23:46.914995 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:47 crc kubenswrapper[4804]: I0128 11:23:47.915223 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:47 crc kubenswrapper[4804]: I0128 11:23:47.915376 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:47 crc kubenswrapper[4804]: I0128 11:23:47.915238 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:47 crc kubenswrapper[4804]: E0128 11:23:47.915485 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:47 crc kubenswrapper[4804]: E0128 11:23:47.915659 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:47 crc kubenswrapper[4804]: E0128 11:23:47.915845 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:48 crc kubenswrapper[4804]: I0128 11:23:48.914281 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:48 crc kubenswrapper[4804]: E0128 11:23:48.915071 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:48 crc kubenswrapper[4804]: I0128 11:23:48.916219 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:23:48 crc kubenswrapper[4804]: E0128 11:23:48.916482 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:23:49 crc kubenswrapper[4804]: I0128 11:23:49.914770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:49 crc kubenswrapper[4804]: I0128 11:23:49.914862 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:49 crc kubenswrapper[4804]: I0128 11:23:49.915050 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:49 crc kubenswrapper[4804]: E0128 11:23:49.915187 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:49 crc kubenswrapper[4804]: E0128 11:23:49.915315 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:49 crc kubenswrapper[4804]: E0128 11:23:49.915452 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:50 crc kubenswrapper[4804]: I0128 11:23:50.914678 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:50 crc kubenswrapper[4804]: E0128 11:23:50.914977 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:51 crc kubenswrapper[4804]: I0128 11:23:51.915080 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:51 crc kubenswrapper[4804]: E0128 11:23:51.915275 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:51 crc kubenswrapper[4804]: I0128 11:23:51.915619 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:51 crc kubenswrapper[4804]: E0128 11:23:51.915708 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:51 crc kubenswrapper[4804]: I0128 11:23:51.916053 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:51 crc kubenswrapper[4804]: E0128 11:23:51.916223 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:52 crc kubenswrapper[4804]: I0128 11:23:52.914268 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:52 crc kubenswrapper[4804]: E0128 11:23:52.914700 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:53 crc kubenswrapper[4804]: I0128 11:23:53.915035 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:53 crc kubenswrapper[4804]: I0128 11:23:53.915084 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:53 crc kubenswrapper[4804]: E0128 11:23:53.915177 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:53 crc kubenswrapper[4804]: I0128 11:23:53.915197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:53 crc kubenswrapper[4804]: E0128 11:23:53.915273 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:53 crc kubenswrapper[4804]: E0128 11:23:53.915337 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:54 crc kubenswrapper[4804]: I0128 11:23:54.914370 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:54 crc kubenswrapper[4804]: E0128 11:23:54.916468 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:55 crc kubenswrapper[4804]: I0128 11:23:55.914808 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:55 crc kubenswrapper[4804]: I0128 11:23:55.914927 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:55 crc kubenswrapper[4804]: I0128 11:23:55.914832 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:55 crc kubenswrapper[4804]: E0128 11:23:55.915031 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:55 crc kubenswrapper[4804]: E0128 11:23:55.915169 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:55 crc kubenswrapper[4804]: E0128 11:23:55.915385 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:56 crc kubenswrapper[4804]: I0128 11:23:56.914439 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:56 crc kubenswrapper[4804]: E0128 11:23:56.914722 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:57 crc kubenswrapper[4804]: I0128 11:23:57.914646 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:57 crc kubenswrapper[4804]: I0128 11:23:57.914748 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:57 crc kubenswrapper[4804]: I0128 11:23:57.914855 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:57 crc kubenswrapper[4804]: E0128 11:23:57.914959 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:57 crc kubenswrapper[4804]: E0128 11:23:57.915345 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:57 crc kubenswrapper[4804]: E0128 11:23:57.915444 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:23:58 crc kubenswrapper[4804]: I0128 11:23:58.931122 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:23:58 crc kubenswrapper[4804]: E0128 11:23:58.931477 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:23:59 crc kubenswrapper[4804]: I0128 11:23:59.914677 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:23:59 crc kubenswrapper[4804]: E0128 11:23:59.914803 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:23:59 crc kubenswrapper[4804]: I0128 11:23:59.915007 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:23:59 crc kubenswrapper[4804]: E0128 11:23:59.915058 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:23:59 crc kubenswrapper[4804]: I0128 11:23:59.915256 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:23:59 crc kubenswrapper[4804]: E0128 11:23:59.915574 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.486775 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487329 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/0.log" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487378 4804 generic.go:334] "Generic (PLEG): container finished" podID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d" exitCode=1 Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487419 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerDied","Data":"888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d"} Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.487465 4804 scope.go:117] "RemoveContainer" containerID="938d58d34360b5bc34e304e47adc1ea7cd17db7e53148a446987ce137bfb1bb7" Jan 28 11:24:00 crc 
kubenswrapper[4804]: I0128 11:24:00.488395 4804 scope.go:117] "RemoveContainer" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d" Jan 28 11:24:00 crc kubenswrapper[4804]: E0128 11:24:00.488631 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-lqqmt_openshift-multus(735b7edc-6f8b-4f5f-a9ca-11964dd78266)\"" pod="openshift-multus/multus-lqqmt" podUID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.516768 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-n7chj" podStartSLOduration=95.516749358 podStartE2EDuration="1m35.516749358s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:23:33.401699498 +0000 UTC m=+89.196579512" watchObservedRunningTime="2026-01-28 11:24:00.516749358 +0000 UTC m=+116.311629352" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.914970 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:00 crc kubenswrapper[4804]: E0128 11:24:00.915187 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:00 crc kubenswrapper[4804]: I0128 11:24:00.916422 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:24:00 crc kubenswrapper[4804]: E0128 11:24:00.916682 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-24gvs_openshift-ovn-kubernetes(686039c6-ae16-45ac-bb9f-4c39d57d6c80)\"" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.493434 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.914572 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.914644 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:01 crc kubenswrapper[4804]: I0128 11:24:01.914644 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:01 crc kubenswrapper[4804]: E0128 11:24:01.915098 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:01 crc kubenswrapper[4804]: E0128 11:24:01.915452 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:01 crc kubenswrapper[4804]: E0128 11:24:01.915793 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:02 crc kubenswrapper[4804]: I0128 11:24:02.914192 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:02 crc kubenswrapper[4804]: E0128 11:24:02.914397 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:03 crc kubenswrapper[4804]: I0128 11:24:03.914923 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:03 crc kubenswrapper[4804]: I0128 11:24:03.914967 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:03 crc kubenswrapper[4804]: I0128 11:24:03.914951 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:03 crc kubenswrapper[4804]: E0128 11:24:03.915140 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:03 crc kubenswrapper[4804]: E0128 11:24:03.915365 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:03 crc kubenswrapper[4804]: E0128 11:24:03.915486 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:04 crc kubenswrapper[4804]: I0128 11:24:04.914773 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:04 crc kubenswrapper[4804]: E0128 11:24:04.916325 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918" Jan 28 11:24:04 crc kubenswrapper[4804]: E0128 11:24:04.939469 4804 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.003254 4804 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 28 11:24:05 crc kubenswrapper[4804]: I0128 11:24:05.914483 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:05 crc kubenswrapper[4804]: I0128 11:24:05.914560 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.914620 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 28 11:24:05 crc kubenswrapper[4804]: I0128 11:24:05.914560 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.914809 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 28 11:24:05 crc kubenswrapper[4804]: E0128 11:24:05.914942 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 28 11:24:06 crc kubenswrapper[4804]: I0128 11:24:06.914578 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:06 crc kubenswrapper[4804]: E0128 11:24:06.914727 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:07 crc kubenswrapper[4804]: I0128 11:24:07.914471 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:07 crc kubenswrapper[4804]: I0128 11:24:07.914571 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:07 crc kubenswrapper[4804]: E0128 11:24:07.914634 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:24:07 crc kubenswrapper[4804]: I0128 11:24:07.914673 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:07 crc kubenswrapper[4804]: E0128 11:24:07.914808 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:24:07 crc kubenswrapper[4804]: E0128 11:24:07.914848 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:24:08 crc kubenswrapper[4804]: I0128 11:24:08.915043 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:08 crc kubenswrapper[4804]: E0128 11:24:08.915253 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:09 crc kubenswrapper[4804]: I0128 11:24:09.914728 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:09 crc kubenswrapper[4804]: I0128 11:24:09.914802 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:09 crc kubenswrapper[4804]: I0128 11:24:09.914857 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:09 crc kubenswrapper[4804]: E0128 11:24:09.915074 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:24:09 crc kubenswrapper[4804]: E0128 11:24:09.915186 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:24:09 crc kubenswrapper[4804]: E0128 11:24:09.915386 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:24:10 crc kubenswrapper[4804]: E0128 11:24:10.005100 4804 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 11:24:10 crc kubenswrapper[4804]: I0128 11:24:10.914303 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:10 crc kubenswrapper[4804]: E0128 11:24:10.914587 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915087 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:11 crc kubenswrapper[4804]: E0128 11:24:11.915484 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915119 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915096 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:11 crc kubenswrapper[4804]: E0128 11:24:11.915606 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:24:11 crc kubenswrapper[4804]: I0128 11:24:11.915614 4804 scope.go:117] "RemoveContainer" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d"
Jan 28 11:24:11 crc kubenswrapper[4804]: E0128 11:24:11.915866 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:24:12 crc kubenswrapper[4804]: I0128 11:24:12.538765 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log"
Jan 28 11:24:12 crc kubenswrapper[4804]: I0128 11:24:12.539492 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb"}
Jan 28 11:24:12 crc kubenswrapper[4804]: I0128 11:24:12.914957 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:12 crc kubenswrapper[4804]: E0128 11:24:12.915197 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:13 crc kubenswrapper[4804]: I0128 11:24:13.914597 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:13 crc kubenswrapper[4804]: I0128 11:24:13.914644 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:13 crc kubenswrapper[4804]: I0128 11:24:13.914860 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:13 crc kubenswrapper[4804]: E0128 11:24:13.914844 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:24:13 crc kubenswrapper[4804]: E0128 11:24:13.915441 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:24:13 crc kubenswrapper[4804]: E0128 11:24:13.915340 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:24:14 crc kubenswrapper[4804]: I0128 11:24:14.915107 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:14 crc kubenswrapper[4804]: E0128 11:24:14.916935 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:14 crc kubenswrapper[4804]: I0128 11:24:14.917582 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"
Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.006065 4804 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.554406 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log"
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.557168 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerStarted","Data":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"}
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.558256 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs"
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.591707 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podStartSLOduration=110.591681649 podStartE2EDuration="1m50.591681649s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:15.591233945 +0000 UTC m=+131.386113929" watchObservedRunningTime="2026-01-28 11:24:15.591681649 +0000 UTC m=+131.386561643"
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.818933 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bgqd8"]
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.819054 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.819133 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.914796 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.915002 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.914815 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.915328 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:24:15 crc kubenswrapper[4804]: I0128 11:24:15.915824 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:15 crc kubenswrapper[4804]: E0128 11:24:15.915971 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914368 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914411 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914411 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:17 crc kubenswrapper[4804]: I0128 11:24:17.914475 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.914759 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.915125 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.915313 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:24:17 crc kubenswrapper[4804]: E0128 11:24:17.915333 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.914861 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915061 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.915115 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.915242 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:19 crc kubenswrapper[4804]: I0128 11:24:19.915285 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915282 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-bgqd8" podUID="03844e8b-8d66-4cd7-aa19-51caa1407918"
Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915499 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 28 11:24:19 crc kubenswrapper[4804]: E0128 11:24:19.915602 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914829 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914865 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914961 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.914837 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.918871 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.918965 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.918960 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.919505 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.919836 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 11:24:21 crc kubenswrapper[4804]: I0128 11:24:21.919932 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert"
Jan 28 11:24:22 crc kubenswrapper[4804]: I0128 11:24:22.949099 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.348944 4804 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.402474 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.402963 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.406419 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.407507 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.407631 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mmdfp"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.408757 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.408985 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.409597 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.409727 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.413466 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.414057 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.414409 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.414762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.415937 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-cljd9"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.416164 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.416554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cljd9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417101 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417244 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417404 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417563 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417615 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417690 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xghdb"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.417811 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.418738 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.420652 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.422745 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vbjk6"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.423710 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.425140 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.426257 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.426935 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.427026 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.427471 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.427656 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428214 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428295 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428588 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.428633 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429024 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429339 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429592 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429677 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.429830 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.430201 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.430234 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.430393 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436164 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436382 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436622 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436814 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.436961 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.437094 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.438974 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.439667 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440036 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pgctg"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440541 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440727 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440809 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.440949 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.442111 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.442516 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.443779 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.444163 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.444365 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.447433 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.447762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.448014 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.448233 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.449938 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.450584 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.451388 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8hc98"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.452396 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.454499 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"]
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.454722 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.454871 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.455181 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456405 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456519 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456652 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456743 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.456849 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.457346 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.457448 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.457788 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.467905 4804 kubelet.go:2421] "SyncLoop ADD" source="api"
pods=["openshift-console-operator/console-operator-58897d9998-6kll7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.482981 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.492791 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.493497 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.493686 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.493846 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.494013 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.494969 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495078 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495259 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495338 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.495726 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.496122 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.496868 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.497282 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.497287 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.498740 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.498810 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499185 4804 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499338 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499364 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499615 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499743 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.499960 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500211 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500347 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500409 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500537 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500577 4804 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500694 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.500846 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503082 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503320 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503648 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503780 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba221b2c-59ae-4358-9328-2639e1e4e1f9-serving-cert\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503843 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9p7h\" (UniqueName: \"kubernetes.io/projected/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-kube-api-access-z9p7h\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " 
pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503868 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503928 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503958 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-auth-proxy-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.503999 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-config\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504023 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504069 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-machine-approver-tls\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504105 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8p5tg\" (UniqueName: \"kubernetes.io/projected/ba221b2c-59ae-4358-9328-2639e1e4e1f9-kube-api-access-8p5tg\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.504217 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.505643 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.508796 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.509412 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.509989 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513227 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513387 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513442 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513727 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513950 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.514057 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.514079 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.513400 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.517306 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.520607 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521010 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521247 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521472 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.521855 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.527233 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.527691 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-h44hn"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.527819 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528031 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528194 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528295 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528421 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528521 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.528586 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529133 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529464 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529505 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.529696 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.530171 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.530404 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.531700 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.533584 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.539815 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.539990 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.541968 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.553793 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.561014 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.561854 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.566022 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.580260 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-slcp9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.581967 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.582180 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.582595 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.583840 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.585838 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.586532 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.586642 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.588816 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.590654 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.591620 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.592164 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.592272 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.592484 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.593025 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.593809 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.594834 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.595353 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.595834 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.597408 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.598605 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-slln9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.600533 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.601768 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.602631 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.604129 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mmdfp"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606264 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606419 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8qf5z\" (UniqueName: \"kubernetes.io/projected/61387edd-4fc9-4cb7-8229-a6578d2d15fb-kube-api-access-8qf5z\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606484 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/46da2b10-cba3-46fa-a2f3-972499966fd3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606631 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: 
\"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-image-import-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-metrics-certs\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606721 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8p5tg\" (UniqueName: \"kubernetes.io/projected/ba221b2c-59ae-4358-9328-2639e1e4e1f9-kube-api-access-8p5tg\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606744 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-config\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7k9g\" (UniqueName: \"kubernetes.io/projected/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-kube-api-access-x7k9g\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc 
kubenswrapper[4804]: I0128 11:24:23.606790 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606807 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606836 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61b65dc4-6aaf-4578-adf4-64759773196a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.606863 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-encryption-config\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607093 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4jch\" (UniqueName: \"kubernetes.io/projected/e2b8b707-60c9-4138-a4d8-d218162737fe-kube-api-access-l4jch\") pod 
\"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607116 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607160 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607177 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607222 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607241 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-dir\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607260 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607279 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-service-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607301 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-etcd-serving-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607323 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607338 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-serving-cert\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607358 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-audit-dir\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607377 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-trusted-ca\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607413 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba221b2c-59ae-4358-9328-2639e1e4e1f9-serving-cert\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607434 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607458 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607479 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6mnd\" (UniqueName: \"kubernetes.io/projected/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-kube-api-access-n6mnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mp29h\" (UniqueName: \"kubernetes.io/projected/43de728c-beeb-4fde-832b-dcf5097867e0-kube-api-access-mp29h\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607525 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607561 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607575 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-serving-cert\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607593 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b8b707-60c9-4138-a4d8-d218162737fe-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607613 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607683 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7b4g\" (UniqueName: \"kubernetes.io/projected/625b312d-62b0-4965-966c-3605f4d649a4-kube-api-access-q7b4g\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607703 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-policies\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607726 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607747 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dce007c-8b8d-4271-bb40-7482176fc529-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607767 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607790 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbhxq\" (UniqueName: \"kubernetes.io/projected/61b65dc4-6aaf-4578-adf4-64759773196a-kube-api-access-cbhxq\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607806 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625b312d-62b0-4965-966c-3605f4d649a4-metrics-tls\") pod 
\"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607824 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61387edd-4fc9-4cb7-8229-a6578d2d15fb-serving-cert\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f90e352-ac01-40fb-bf8d-50500206f0ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607936 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.607955 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf33f13a-5328-47e6-8e14-1c0a84927117-service-ca-bundle\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608145 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9k62\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-kube-api-access-b9k62\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608178 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57150906-6899-4d65-b5e5-5092215695b7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608197 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608245 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-auth-proxy-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608284 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-service-ca-bundle\") pod 
\"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608441 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q848q\" (UniqueName: \"kubernetes.io/projected/4e425cf1-0352-47be-9c58-2bad27ccc3c1-kube-api-access-q848q\") pod \"downloads-7954f5f757-cljd9\" (UID: \"4e425cf1-0352-47be-9c58-2bad27ccc3c1\") " pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608502 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608532 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kmtb8\" (UniqueName: \"kubernetes.io/projected/ffe68ef2-471a-42e3-a825-f90c8a5f6028-kube-api-access-kmtb8\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: 
I0128 11:24:23.608585 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61b65dc4-6aaf-4578-adf4-64759773196a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608606 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-etcd-client\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.608875 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48l2b\" (UniqueName: \"kubernetes.io/projected/65cbbd20-6185-455b-814b-7de34194ec29-kube-api-access-48l2b\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609137 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609155 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: 
\"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609182 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-stats-auth\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609211 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-default-certificate\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609272 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-machine-approver-tls\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609293 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-config\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609327 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f90e352-ac01-40fb-bf8d-50500206f0ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609495 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-auth-proxy-config\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609940 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe68ef2-471a-42e3-a825-f90c8a5f6028-serving-cert\") pod 
\"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.609962 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfbh8\" (UniqueName: \"kubernetes.io/projected/cf33f13a-5328-47e6-8e14-1c0a84927117-kube-api-access-tfbh8\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dce007c-8b8d-4271-bb40-7482176fc529-config\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610037 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-client\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc 
kubenswrapper[4804]: I0128 11:24:23.610155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj7lz\" (UniqueName: \"kubernetes.io/projected/ab667a9d-5e0b-4faa-909e-5f778579e853-kube-api-access-lj7lz\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57150906-6899-4d65-b5e5-5092215695b7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610326 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-node-pullsecrets\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610362 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610381 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/43de728c-beeb-4fde-832b-dcf5097867e0-config-volume\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-encryption-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610431 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610462 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z9p7h\" (UniqueName: \"kubernetes.io/projected/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-kube-api-access-z9p7h\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610482 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwdbv\" (UniqueName: \"kubernetes.io/projected/1a74db24-5aca-48f9-889c-e37d8cdba99e-kube-api-access-fwdbv\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610501 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-audit\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610542 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dce007c-8b8d-4271-bb40-7482176fc529-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610621 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-client\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610646 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-config\") pod 
\"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610677 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8q76\" (UniqueName: \"kubernetes.io/projected/57150906-6899-4d65-b5e5-5092215695b7-kube-api-access-w8q76\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-images\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610732 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-serving-cert\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610761 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4rm6\" (UniqueName: \"kubernetes.io/projected/881a5709-4ff6-448e-ba75-caf5f7e61a5b-kube-api-access-p4rm6\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: 
I0128 11:24:23.610790 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610814 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe68ef2-471a-42e3-a825-f90c8a5f6028-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610869 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-config\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64g4r\" (UniqueName: \"kubernetes.io/projected/46da2b10-cba3-46fa-a2f3-972499966fd3-kube-api-access-64g4r\") pod 
\"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610956 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-config\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.610979 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43de728c-beeb-4fde-832b-dcf5097867e0-metrics-tls\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.611000 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pzm\" (UniqueName: \"kubernetes.io/projected/9ad95836-c587-4ca7-b5fa-f878af1019b6-kube-api-access-v2pzm\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.612183 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-config\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.612623 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-service-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.612923 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ba221b2c-59ae-4358-9328-2639e1e4e1f9-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.613343 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.614960 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.615074 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ba221b2c-59ae-4358-9328-2639e1e4e1f9-serving-cert\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.618531 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pgctg"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.618676 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vbjk6"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.618766 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.623147 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.623191 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.624549 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-machine-approver-tls\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.627129 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.629556 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.630120 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.630314 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.633016 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.635649 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.636774 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.638772 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.641650 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6kll7"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.644562 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.646105 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.646525 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.648719 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.650485 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.652781 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.654495 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.656516 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.658414 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vc78g"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.661839 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.665843 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.670300 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-97kr8"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.672828 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.672951 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.673044 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.674525 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.679439 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vc78g"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.681537 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8hc98"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.683222 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-slln9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.685016 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.685332 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.687094 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.689056 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.690763 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cljd9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.692692 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-slcp9"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.694458 
4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.696651 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.698274 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.699807 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qj7pb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.701745 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qj7pb"] Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.701766 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.705539 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712068 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-node-pullsecrets\") pod \"apiserver-76f77b778f-vbjk6\" (UID: 
\"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712123 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43de728c-beeb-4fde-832b-dcf5097867e0-config-volume\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712165 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj7lz\" (UniqueName: \"kubernetes.io/projected/ab667a9d-5e0b-4faa-909e-5f778579e853-kube-api-access-lj7lz\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712186 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57150906-6899-4d65-b5e5-5092215695b7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712207 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-encryption-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712226 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712245 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-node-pullsecrets\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712259 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fwdbv\" (UniqueName: \"kubernetes.io/projected/1a74db24-5aca-48f9-889c-e37d8cdba99e-kube-api-access-fwdbv\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712338 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-audit\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712407 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dce007c-8b8d-4271-bb40-7482176fc529-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-client\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712463 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712525 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-images\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712613 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8q76\" (UniqueName: \"kubernetes.io/projected/57150906-6899-4d65-b5e5-5092215695b7-kube-api-access-w8q76\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712647 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-serving-cert\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712701 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712730 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe68ef2-471a-42e3-a825-f90c8a5f6028-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712761 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4rm6\" (UniqueName: 
\"kubernetes.io/projected/881a5709-4ff6-448e-ba75-caf5f7e61a5b-kube-api-access-p4rm6\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712798 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712827 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-config\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712858 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64g4r\" (UniqueName: \"kubernetes.io/projected/46da2b10-cba3-46fa-a2f3-972499966fd3-kube-api-access-64g4r\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712917 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43de728c-beeb-4fde-832b-dcf5097867e0-metrics-tls\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712946 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-v2pzm\" (UniqueName: \"kubernetes.io/projected/9ad95836-c587-4ca7-b5fa-f878af1019b6-kube-api-access-v2pzm\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.712983 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8qf5z\" (UniqueName: \"kubernetes.io/projected/61387edd-4fc9-4cb7-8229-a6578d2d15fb-kube-api-access-8qf5z\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713012 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/46da2b10-cba3-46fa-a2f3-972499966fd3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713046 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-image-import-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713074 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-metrics-certs\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 
11:24:23.713113 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-config\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713141 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x7k9g\" (UniqueName: \"kubernetes.io/projected/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-kube-api-access-x7k9g\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713173 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713200 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713228 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61b65dc4-6aaf-4578-adf4-64759773196a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713261 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713290 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713317 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-encryption-config\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713347 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4jch\" (UniqueName: \"kubernetes.io/projected/e2b8b707-60c9-4138-a4d8-d218162737fe-kube-api-access-l4jch\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713382 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod 
\"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713408 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713472 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713523 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-dir\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-74dlk\" (UniqueName: 
\"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713586 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713613 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-serving-cert\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-service-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713668 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-etcd-serving-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713701 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713729 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-audit-dir\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713758 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-trusted-ca\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713784 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-auth-proxy-config\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713808 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6mnd\" (UniqueName: \"kubernetes.io/projected/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-kube-api-access-n6mnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713841 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713870 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713918 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mp29h\" (UniqueName: \"kubernetes.io/projected/43de728c-beeb-4fde-832b-dcf5097867e0-kube-api-access-mp29h\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713952 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7b4g\" (UniqueName: \"kubernetes.io/projected/625b312d-62b0-4965-966c-3605f4d649a4-kube-api-access-q7b4g\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.713981 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-policies\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: 
\"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714012 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-serving-cert\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b8b707-60c9-4138-a4d8-d218162737fe-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714071 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714100 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dce007c-8b8d-4271-bb40-7482176fc529-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714124 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714153 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714181 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f90e352-ac01-40fb-bf8d-50500206f0ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714209 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbhxq\" (UniqueName: \"kubernetes.io/projected/61b65dc4-6aaf-4578-adf4-64759773196a-kube-api-access-cbhxq\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714235 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625b312d-62b0-4965-966c-3605f4d649a4-metrics-tls\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714259 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61387edd-4fc9-4cb7-8229-a6578d2d15fb-serving-cert\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714281 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714297 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714350 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf33f13a-5328-47e6-8e14-1c0a84927117-service-ca-bundle\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714393 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9k62\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-kube-api-access-b9k62\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc 
kubenswrapper[4804]: I0128 11:24:23.714417 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57150906-6899-4d65-b5e5-5092215695b7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714426 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-images\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714438 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714532 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/65cbbd20-6185-455b-814b-7de34194ec29-audit-dir\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " 
pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kmtb8\" (UniqueName: \"kubernetes.io/projected/ffe68ef2-471a-42e3-a825-f90c8a5f6028-kube-api-access-kmtb8\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714567 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q848q\" (UniqueName: \"kubernetes.io/projected/4e425cf1-0352-47be-9c58-2bad27ccc3c1-kube-api-access-q848q\") pod \"downloads-7954f5f757-cljd9\" (UID: \"4e425cf1-0352-47be-9c58-2bad27ccc3c1\") " pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714612 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714643 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61b65dc4-6aaf-4578-adf4-64759773196a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714669 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-etcd-client\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714699 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-stats-auth\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714724 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48l2b\" (UniqueName: \"kubernetes.io/projected/65cbbd20-6185-455b-814b-7de34194ec29-kube-api-access-48l2b\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714749 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714777 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-config\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714791 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" 
(UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-etcd-serving-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714803 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-default-certificate\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714921 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f90e352-ac01-40fb-bf8d-50500206f0ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715014 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfbh8\" (UniqueName: \"kubernetes.io/projected/cf33f13a-5328-47e6-8e14-1c0a84927117-kube-api-access-tfbh8\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " 
pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715043 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715072 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe68ef2-471a-42e3-a825-f90c8a5f6028-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715107 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dce007c-8b8d-4271-bb40-7482176fc529-config\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715138 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-client\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715428 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-trusted-ca-bundle\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715779 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e2b8b707-60c9-4138-a4d8-d218162737fe-config\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714284 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-audit\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715915 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/57150906-6899-4d65-b5e5-5092215695b7-available-featuregates\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.715977 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-trusted-ca\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.716427 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-config\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.714773 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ffe68ef2-471a-42e3-a825-f90c8a5f6028-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.717445 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.717837 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-dir\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.717994 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718190 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/65cbbd20-6185-455b-814b-7de34194ec29-image-import-ca\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718225 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718159 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61b65dc4-6aaf-4578-adf4-64759773196a-config\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718371 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718480 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61387edd-4fc9-4cb7-8229-a6578d2d15fb-config\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718647 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.718715 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-service-ca\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719087 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/881a5709-4ff6-448e-ba75-caf5f7e61a5b-audit-policies\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719230 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dce007c-8b8d-4271-bb40-7482176fc529-config\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719501 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.719628 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.720380 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.720526 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-etcd-client\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.720568 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/0f90e352-ac01-40fb-bf8d-50500206f0ac-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: 
I0128 11:24:23.720968 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.721371 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.721764 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-etcd-client\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722222 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/57150906-6899-4d65-b5e5-5092215695b7-serving-cert\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ffe68ef2-471a-42e3-a825-f90c8a5f6028-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" Jan 
28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722802 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61b65dc4-6aaf-4578-adf4-64759773196a-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.722849 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/625b312d-62b0-4965-966c-3605f4d649a4-metrics-tls\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723163 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1a74db24-5aca-48f9-889c-e37d8cdba99e-serving-cert\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723433 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723673 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-encryption-config\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 
crc kubenswrapper[4804]: I0128 11:24:23.723747 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.723822 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-encryption-config\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724183 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724460 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724521 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/0f90e352-ac01-40fb-bf8d-50500206f0ac-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724610 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-etcd-client\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724769 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.724950 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/65cbbd20-6185-455b-814b-7de34194ec29-serving-cert\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.725541 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dce007c-8b8d-4271-bb40-7482176fc529-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.725651 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.726418 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/61387edd-4fc9-4cb7-8229-a6578d2d15fb-serving-cert\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.726585 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/e2b8b707-60c9-4138-a4d8-d218162737fe-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.727297 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/881a5709-4ff6-448e-ba75-caf5f7e61a5b-serving-cert\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.747957 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.765837 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.778019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-metrics-certs\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.786355 4804 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress"/"service-ca-bundle" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.795678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cf33f13a-5328-47e6-8e14-1c0a84927117-service-ca-bundle\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.805955 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.827629 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.841613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-stats-auth\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.846351 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.866131 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.881013 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/cf33f13a-5328-47e6-8e14-1c0a84927117-default-certificate\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.885931 4804 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.905964 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.926005 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.944847 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.965431 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 11:24:23 crc kubenswrapper[4804]: I0128 11:24:23.985697 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.004356 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.025609 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.045988 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.066404 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.090249 4804 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.105681 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.126129 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.144704 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.184685 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.186315 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.211604 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.224950 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.246300 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.265539 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.285588 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 
28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.290027 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/46da2b10-cba3-46fa-a2f3-972499966fd3-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.305714 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.326105 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.345698 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.365276 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.385130 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.405585 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.417523 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.424890 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.445434 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.454823 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.467505 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.486766 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.505677 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.526598 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.566097 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 11:24:24 
crc kubenswrapper[4804]: I0128 11:24:24.573619 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/43de728c-beeb-4fde-832b-dcf5097867e0-config-volume\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.584031 4804 request.go:700] Waited for 1.001560987s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-dns/secrets?fieldSelector=metadata.name%3Ddns-dockercfg-jwfmh&limit=500&resourceVersion=0 Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.586241 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.606483 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.624620 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/43de728c-beeb-4fde-832b-dcf5097867e0-metrics-tls\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.626954 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.655634 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.670581 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.689754 4804 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.707128 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.712435 4804 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.712583 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images podName:a7c281fd-3e5a-4edc-98f7-8703c1f08aab nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.212549487 +0000 UTC m=+141.007429481 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images") pod "machine-config-operator-74547568cd-6g5ff" (UID: "a7c281fd-3e5a-4edc-98f7-8703c1f08aab") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714570 4804 secret.go:188] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714617 4804 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714640 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls podName:a7c281fd-3e5a-4edc-98f7-8703c1f08aab nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:25.214626745 +0000 UTC m=+141.009506739 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls") pod "machine-config-operator-74547568cd-6g5ff" (UID: "a7c281fd-3e5a-4edc-98f7-8703c1f08aab") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.714671 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle podName:9ad95836-c587-4ca7-b5fa-f878af1019b6 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.214651895 +0000 UTC m=+141.009531899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle") pod "service-ca-9c57cc56f-slln9" (UID: "9ad95836-c587-4ca7-b5fa-f878af1019b6") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.718873 4804 secret.go:188] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.719006 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert podName:ab667a9d-5e0b-4faa-909e-5f778579e853 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.218973278 +0000 UTC m=+141.013853282 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert") pod "package-server-manager-789f6589d5-47d82" (UID: "ab667a9d-5e0b-4faa-909e-5f778579e853") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.719959 4804 secret.go:188] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: E0128 11:24:24.720192 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key podName:9ad95836-c587-4ca7-b5fa-f878af1019b6 nodeName:}" failed. No retries permitted until 2026-01-28 11:24:25.220162776 +0000 UTC m=+141.015042930 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key") pod "service-ca-9c57cc56f-slln9" (UID: "9ad95836-c587-4ca7-b5fa-f878af1019b6") : failed to sync secret cache: timed out waiting for the condition Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.727044 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.745917 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.765526 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.784504 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 11:24:24 
crc kubenswrapper[4804]: I0128 11:24:24.805851 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.827310 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.852593 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.865715 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.886118 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.906409 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.926551 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.946255 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.966743 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 11:24:24 crc kubenswrapper[4804]: I0128 11:24:24.986034 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.005645 4804 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.024826 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.045509 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.065108 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.086029 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.106411 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.125945 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.145743 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.166232 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.185932 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.205794 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.225466 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241653 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241714 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241743 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241793 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.241822 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.242535 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-images\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.243349 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-cabundle\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.246633 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.246811 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-proxy-tls\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.247593 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/9ad95836-c587-4ca7-b5fa-f878af1019b6-signing-key\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.251635 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/ab667a9d-5e0b-4faa-909e-5f778579e853-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.306768 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8p5tg\" (UniqueName: \"kubernetes.io/projected/ba221b2c-59ae-4358-9328-2639e1e4e1f9-kube-api-access-8p5tg\") pod \"authentication-operator-69f744f599-8hc98\" (UID: \"ba221b2c-59ae-4358-9328-2639e1e4e1f9\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.320225 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z9p7h\" (UniqueName: \"kubernetes.io/projected/521dbee5-5d69-4fd4-bcfc-8b2b4b404389-kube-api-access-z9p7h\") pod \"machine-approver-56656f9798-47m7l\" (UID: \"521dbee5-5d69-4fd4-bcfc-8b2b4b404389\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.326317 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.344785 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.365502 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.385633 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.406154 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.426331 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.445198 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.445494 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.466462 4804 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 28 11:24:25 crc kubenswrapper[4804]: W0128 11:24:25.468371 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod521dbee5_5d69_4fd4_bcfc_8b2b4b404389.slice/crio-4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36 WatchSource:0}: Error finding container 4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36: Status 404 returned error can't find the container with id 4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.481918 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.486195 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.506765 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.540638 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj7lz\" (UniqueName: \"kubernetes.io/projected/ab667a9d-5e0b-4faa-909e-5f778579e853-kube-api-access-lj7lz\") pod \"package-server-manager-789f6589d5-47d82\" (UID: \"ab667a9d-5e0b-4faa-909e-5f778579e853\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.558282 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fwdbv\" (UniqueName: \"kubernetes.io/projected/1a74db24-5aca-48f9-889c-e37d8cdba99e-kube-api-access-fwdbv\") pod \"etcd-operator-b45778765-pgctg\" (UID: \"1a74db24-5aca-48f9-889c-e37d8cdba99e\") " pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.582608 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8q76\" (UniqueName: \"kubernetes.io/projected/57150906-6899-4d65-b5e5-5092215695b7-kube-api-access-w8q76\") pod \"openshift-config-operator-7777fb866f-g8nn2\" (UID: \"57150906-6899-4d65-b5e5-5092215695b7\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.603516 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2dce007c-8b8d-4271-bb40-7482176fc529-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-vdpgq\" (UID: \"2dce007c-8b8d-4271-bb40-7482176fc529\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.603622 4804 request.go:700] Waited for 1.889928854s due to client-side throttling, not priority and fairness, request: POST:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-cluster-samples-operator/serviceaccounts/cluster-samples-operator/token
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.610758 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" event={"ID":"521dbee5-5d69-4fd4-bcfc-8b2b4b404389","Type":"ContainerStarted","Data":"4641c91e8899a2d26f25a7ac04c61d70d134776b38ba01765ec86eecc04cbe36"}
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.624046 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64g4r\" (UniqueName: \"kubernetes.io/projected/46da2b10-cba3-46fa-a2f3-972499966fd3-kube-api-access-64g4r\") pod \"cluster-samples-operator-665b6dd947-4wpb6\" (UID: \"46da2b10-cba3-46fa-a2f3-972499966fd3\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.642445 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4rm6\" (UniqueName: \"kubernetes.io/projected/881a5709-4ff6-448e-ba75-caf5f7e61a5b-kube-api-access-p4rm6\") pod \"apiserver-7bbb656c7d-2kmn2\" (UID: \"881a5709-4ff6-448e-ba75-caf5f7e61a5b\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.661509 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"controller-manager-879f6c89f-z4j56\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.664069 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.683068 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"console-f9d7485db-xghdb\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.695032 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.707853 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4jch\" (UniqueName: \"kubernetes.io/projected/e2b8b707-60c9-4138-a4d8-d218162737fe-kube-api-access-l4jch\") pod \"machine-api-operator-5694c8668f-m5p7p\" (UID: \"e2b8b707-60c9-4138-a4d8-d218162737fe\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.709636 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-8hc98"]
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.722617 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kmtb8\" (UniqueName: \"kubernetes.io/projected/ffe68ef2-471a-42e3-a825-f90c8a5f6028-kube-api-access-kmtb8\") pod \"openshift-controller-manager-operator-756b6f6bc6-z5n98\" (UID: \"ffe68ef2-471a-42e3-a825-f90c8a5f6028\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"
Jan 28 11:24:25 crc kubenswrapper[4804]: W0128 11:24:25.729202 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podba221b2c_59ae_4358_9328_2639e1e4e1f9.slice/crio-396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04 WatchSource:0}: Error finding container 396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04: Status 404 returned error can't find the container with id 396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.742739 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q848q\" (UniqueName: \"kubernetes.io/projected/4e425cf1-0352-47be-9c58-2bad27ccc3c1-kube-api-access-q848q\") pod \"downloads-7954f5f757-cljd9\" (UID: \"4e425cf1-0352-47be-9c58-2bad27ccc3c1\") " pod="openshift-console/downloads-7954f5f757-cljd9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.756318 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.763720 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9k62\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-kube-api-access-b9k62\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.772276 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.784959 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8qf5z\" (UniqueName: \"kubernetes.io/projected/61387edd-4fc9-4cb7-8229-a6578d2d15fb-kube-api-access-8qf5z\") pod \"console-operator-58897d9998-6kll7\" (UID: \"61387edd-4fc9-4cb7-8229-a6578d2d15fb\") " pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.790251 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.800412 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.803742 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/0f90e352-ac01-40fb-bf8d-50500206f0ac-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-hxdnv\" (UID: \"0f90e352-ac01-40fb-bf8d-50500206f0ac\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.821062 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48l2b\" (UniqueName: \"kubernetes.io/projected/65cbbd20-6185-455b-814b-7de34194ec29-kube-api-access-48l2b\") pod \"apiserver-76f77b778f-vbjk6\" (UID: \"65cbbd20-6185-455b-814b-7de34194ec29\") " pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.830196 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.844345 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6mnd\" (UniqueName: \"kubernetes.io/projected/9d40e6f6-2a67-4ec3-a612-77c2f9f6517d-kube-api-access-n6mnd\") pod \"kube-storage-version-migrator-operator-b67b599dd-mcdcv\" (UID: \"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.867560 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pzm\" (UniqueName: \"kubernetes.io/projected/9ad95836-c587-4ca7-b5fa-f878af1019b6-kube-api-access-v2pzm\") pod \"service-ca-9c57cc56f-slln9\" (UID: \"9ad95836-c587-4ca7-b5fa-f878af1019b6\") " pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.875612 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.881011 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfbh8\" (UniqueName: \"kubernetes.io/projected/cf33f13a-5328-47e6-8e14-1c0a84927117-kube-api-access-tfbh8\") pod \"router-default-5444994796-h44hn\" (UID: \"cf33f13a-5328-47e6-8e14-1c0a84927117\") " pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.900459 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x7k9g\" (UniqueName: \"kubernetes.io/projected/a7c281fd-3e5a-4edc-98f7-8703c1f08aab-kube-api-access-x7k9g\") pod \"machine-config-operator-74547568cd-6g5ff\" (UID: \"a7c281fd-3e5a-4edc-98f7-8703c1f08aab\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.910145 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.911200 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.925004 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mp29h\" (UniqueName: \"kubernetes.io/projected/43de728c-beeb-4fde-832b-dcf5097867e0-kube-api-access-mp29h\") pod \"dns-default-slcp9\" (UID: \"43de728c-beeb-4fde-832b-dcf5097867e0\") " pod="openshift-dns/dns-default-slcp9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.928925 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-cljd9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.933742 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.940187 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.953178 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-slcp9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.953296 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7b4g\" (UniqueName: \"kubernetes.io/projected/625b312d-62b0-4965-966c-3605f4d649a4-kube-api-access-q7b4g\") pod \"dns-operator-744455d44c-mmdfp\" (UID: \"625b312d-62b0-4965-966c-3605f4d649a4\") " pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.954798 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"]
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.976097 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"route-controller-manager-6576b87f9c-wg94f\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.978765 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.983407 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.987639 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbhxq\" (UniqueName: \"kubernetes.io/projected/61b65dc4-6aaf-4578-adf4-64759773196a-kube-api-access-cbhxq\") pod \"openshift-apiserver-operator-796bbdcf4f-66fn9\" (UID: \"61b65dc4-6aaf-4578-adf4-64759773196a\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"
Jan 28 11:24:25 crc kubenswrapper[4804]: I0128 11:24:25.995686 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.002832 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.012959 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.036417 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-slln9"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.055268 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a5d490-28ef-438f-b03c-6b15d30bbb1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057811 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cdff00-d1aa-4535-b269-b692986cd76c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.057966 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058004 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058024 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-srv-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058043 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058094 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ss8l\" (UniqueName: \"kubernetes.io/projected/50a5d490-28ef-438f-b03c-6b15d30bbb1e-kube-api-access-2ss8l\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058168 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058197 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058272 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058294 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwljj\" (UniqueName: \"kubernetes.io/projected/456e451f-8bcc-49ad-a5e8-502c294e8518-kube-api-access-lwljj\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058333 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058378 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058400 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456e451f-8bcc-49ad-a5e8-502c294e8518-serving-cert\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kx4f2\" (UniqueName: \"kubernetes.io/projected/f90c0f76-ca48-4b2f-89cc-b90cc1172576-kube-api-access-kx4f2\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058443 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.058477 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.059876 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456e451f-8bcc-49ad-a5e8-502c294e8518-config\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.067917 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.067972 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.067997 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.068029 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-webhook-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069768 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069799 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-metrics-tls\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069819 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069838 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.069855 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4cdff00-d1aa-4535-b269-b692986cd76c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070535 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070570 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nhpl9\" (UniqueName: \"kubernetes.io/projected/349fe87d-e741-4dc4-bc78-322b541e0a3f-kube-api-access-nhpl9\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070731
4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070754 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0c2686a-d8ed-4c34-8677-4371daf94ea4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070776 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070805 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50a5d490-28ef-438f-b03c-6b15d30bbb1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070825 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxjxq\" (UniqueName: 
\"kubernetes.io/projected/9927b5d4-5460-4d78-9320-af3916443c1a-kube-api-access-pxjxq\") pod \"migrator-59844c95c7-qdn6v\" (UID: \"9927b5d4-5460-4d78-9320-af3916443c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070862 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.070954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-trusted-ca\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071047 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071097 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbc69\" (UniqueName: \"kubernetes.io/projected/d0c2686a-d8ed-4c34-8677-4371daf94ea4-kube-api-access-sbc69\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " 
pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071117 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.071142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.072738 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.572716657 +0000 UTC m=+142.367596841 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072772 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkbb8\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-kube-api-access-zkbb8\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072798 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072843 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cdff00-d1aa-4535-b269-b692986cd76c-config\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.072973 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073617 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073694 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073780 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkhjz\" (UniqueName: \"kubernetes.io/projected/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-kube-api-access-hkhjz\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: 
\"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.073833 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-tmpfs\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074014 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz7pt\" (UniqueName: \"kubernetes.io/projected/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-kube-api-access-fz7pt\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-srv-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074178 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.074245 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.160860 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.174954 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.175534 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.175755 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kx4f2\" (UniqueName: \"kubernetes.io/projected/f90c0f76-ca48-4b2f-89cc-b90cc1172576-kube-api-access-kx4f2\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.175790 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176100 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176131 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456e451f-8bcc-49ad-a5e8-502c294e8518-config\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176158 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-certs\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176189 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176215 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnf5b\" (UniqueName: 
\"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176238 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176260 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-webhook-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176279 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176297 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-metrics-tls\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176317 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176336 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4cdff00-d1aa-4535-b269-b692986cd76c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176404 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176428 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-kube-api-access\") pod 
\"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176444 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nhpl9\" (UniqueName: \"kubernetes.io/projected/349fe87d-e741-4dc4-bc78-322b541e0a3f-kube-api-access-nhpl9\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176483 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lcpg\" (UniqueName: \"kubernetes.io/projected/113634df-0b68-4670-8c3d-8d227c626095-kube-api-access-6lcpg\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176512 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176564 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b9vk\" (UniqueName: \"kubernetes.io/projected/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-kube-api-access-8b9vk\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176586 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.176661 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.676631683 +0000 UTC m=+142.471511667 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176692 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0c2686a-d8ed-4c34-8677-4371daf94ea4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176778 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50a5d490-28ef-438f-b03c-6b15d30bbb1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176806 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxjxq\" (UniqueName: \"kubernetes.io/projected/9927b5d4-5460-4d78-9320-af3916443c1a-kube-api-access-pxjxq\") pod \"migrator-59844c95c7-qdn6v\" (UID: \"9927b5d4-5460-4d78-9320-af3916443c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176849 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176894 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-node-bootstrap-token\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176922 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-trusted-ca\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.176947 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbc69\" (UniqueName: \"kubernetes.io/projected/d0c2686a-d8ed-4c34-8677-4371daf94ea4-kube-api-access-sbc69\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182367 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc 
kubenswrapper[4804]: I0128 11:24:26.182413 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-apiservice-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182443 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.178820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182498 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkbb8\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-kube-api-access-zkbb8\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182531 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: 
\"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182555 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cdff00-d1aa-4535-b269-b692986cd76c-config\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182590 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182632 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182689 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182715 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkhjz\" (UniqueName: \"kubernetes.io/projected/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-kube-api-access-hkhjz\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182741 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-tmpfs\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.181241 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.182769 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz7pt\" (UniqueName: \"kubernetes.io/projected/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-kube-api-access-fz7pt\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 
crc kubenswrapper[4804]: I0128 11:24:26.179854 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/50a5d490-28ef-438f-b03c-6b15d30bbb1e-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.183011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-registration-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.179543 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/456e451f-8bcc-49ad-a5e8-502c294e8518-config\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.183085 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-kube-api-access-pkz5n\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.181521 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-trusted-ca\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: 
\"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.177481 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.184574 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-srv-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.184773 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185219 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/50a5d490-28ef-438f-b03c-6b15d30bbb1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185786 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cdff00-d1aa-4535-b269-b692986cd76c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185817 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185839 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.185917 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/113634df-0b68-4670-8c3d-8d227c626095-cert\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186201 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-socket-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186254 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186280 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186320 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-srv-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186345 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186366 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-plugins-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2ss8l\" (UniqueName: \"kubernetes.io/projected/50a5d490-28ef-438f-b03c-6b15d30bbb1e-kube-api-access-2ss8l\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186526 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-mountpoint-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186581 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186643 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186667 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwljj\" (UniqueName: \"kubernetes.io/projected/456e451f-8bcc-49ad-a5e8-502c294e8518-kube-api-access-lwljj\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186684 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-csi-data-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc 
kubenswrapper[4804]: I0128 11:24:26.186756 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186779 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186821 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.186853 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456e451f-8bcc-49ad-a5e8-502c294e8518-serving-cert\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.187758 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-tmpfs\") pod 
\"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.188910 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.190736 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-webhook-cert\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.191030 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.691005794 +0000 UTC m=+142.485885778 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.193153 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.192457 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.192895 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-metrics-tls\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.191787 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: 
\"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.192994 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c4cdff00-d1aa-4535-b269-b692986cd76c-config\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.193565 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.193715 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.194771 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.196081 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-srv-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.196158 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.197001 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201191 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-profile-collector-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201413 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201457 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.201761 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/349fe87d-e741-4dc4-bc78-322b541e0a3f-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.202376 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.202995 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: 
\"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.203734 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/d0c2686a-d8ed-4c34-8677-4371daf94ea4-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.205536 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/50a5d490-28ef-438f-b03c-6b15d30bbb1e-proxy-tls\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206234 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206304 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206446 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f90c0f76-ca48-4b2f-89cc-b90cc1172576-srv-cert\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.206793 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.209605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.212859 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.214808 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216035 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-apiservice-cert\") pod 
\"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216254 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216345 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c4cdff00-d1aa-4535-b269-b692986cd76c-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.216473 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/456e451f-8bcc-49ad-a5e8-502c294e8518-serving-cert\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.217449 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.218266 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.226333 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kx4f2\" (UniqueName: \"kubernetes.io/projected/f90c0f76-ca48-4b2f-89cc-b90cc1172576-kube-api-access-kx4f2\") pod \"catalog-operator-68c6474976-n9ds8\" (UID: \"f90c0f76-ca48-4b2f-89cc-b90cc1172576\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: W0128 11:24:26.232034 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcf33f13a_5328_47e6_8e14_1c0a84927117.slice/crio-b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5 WatchSource:0}: Error finding container b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5: Status 404 returned error can't find the container with id b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5 Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.261685 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c4cdff00-d1aa-4535-b269-b692986cd76c-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-8rkrw\" (UID: \"c4cdff00-d1aa-4535-b269-b692986cd76c\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.271493 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod 
\"oauth-openshift-558db77b4-44lsd\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292086 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292450 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/113634df-0b68-4670-8c3d-8d227c626095-cert\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292507 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-socket-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292535 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-plugins-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-mountpoint-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: 
\"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292638 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-csi-data-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292675 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-certs\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lcpg\" (UniqueName: \"kubernetes.io/projected/113634df-0b68-4670-8c3d-8d227c626095-kube-api-access-6lcpg\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292752 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8b9vk\" (UniqueName: \"kubernetes.io/projected/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-kube-api-access-8b9vk\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-node-bootstrap-token\") pod \"machine-config-server-97kr8\" (UID: 
\"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.292861 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-registration-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.295084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-mountpoint-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.295228 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.795187289 +0000 UTC m=+142.590067273 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.295448 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-kube-api-access-pkz5n\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.295695 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-csi-data-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.297800 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-socket-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.298061 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-plugins-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " 
pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.298189 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-registration-dir\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.308426 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxjxq\" (UniqueName: \"kubernetes.io/projected/9927b5d4-5460-4d78-9320-af3916443c1a-kube-api-access-pxjxq\") pod \"migrator-59844c95c7-qdn6v\" (UID: \"9927b5d4-5460-4d78-9320-af3916443c1a\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.309499 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-node-bootstrap-token\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.310756 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/113634df-0b68-4670-8c3d-8d227c626095-cert\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.312611 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/33a8cd21-66e2-4a77-9596-ea6af6f4f2b3-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-mtffd\" (UID: \"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.318197 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-certs\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.327347 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.336672 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nhpl9\" (UniqueName: \"kubernetes.io/projected/349fe87d-e741-4dc4-bc78-322b541e0a3f-kube-api-access-nhpl9\") pod \"olm-operator-6b444d44fb-7ncgb\" (UID: \"349fe87d-e741-4dc4-bc78-322b541e0a3f\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.345610 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.350509 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.374428 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbc69\" (UniqueName: \"kubernetes.io/projected/d0c2686a-d8ed-4c34-8677-4371daf94ea4-kube-api-access-sbc69\") pod \"multus-admission-controller-857f4d67dd-gsq9d\" (UID: \"d0c2686a-d8ed-4c34-8677-4371daf94ea4\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.389192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz7pt\" (UniqueName: \"kubernetes.io/projected/3a69cec6-e1b7-4e4d-88f7-de85e459ed7b-kube-api-access-fz7pt\") pod \"packageserver-d55dfcdfc-2xbh5\" (UID: \"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.398966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.399376 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:26.899362993 +0000 UTC m=+142.694242977 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.403443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.420158 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") pod \"marketplace-operator-79b997595-ml79j\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.465916 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.467046 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-bound-sa-token\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.477580 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkhjz\" (UniqueName: \"kubernetes.io/projected/c03ebf08-d5a0-48b4-a1ca-3eec30c14490-kube-api-access-hkhjz\") pod \"control-plane-machine-set-operator-78cbb6b69f-f822b\" (UID: \"c03ebf08-d5a0-48b4-a1ca-3eec30c14490\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.499237 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.500710 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.506325 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkbb8\" (UniqueName: \"kubernetes.io/projected/6ba550eb-2fae-4448-9bc8-7c8ecf3de616-kube-api-access-zkbb8\") pod \"ingress-operator-5b745b69d9-v79mb\" (UID: \"6ba550eb-2fae-4448-9bc8-7c8ecf3de616\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.507373 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.507608 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.00758708 +0000 UTC m=+142.802467064 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.507988 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.509749 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.00972833 +0000 UTC m=+142.804608314 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.511315 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2ss8l\" (UniqueName: \"kubernetes.io/projected/50a5d490-28ef-438f-b03c-6b15d30bbb1e-kube-api-access-2ss8l\") pod \"machine-config-controller-84d6567774-8m8b7\" (UID: \"50a5d490-28ef-438f-b03c-6b15d30bbb1e\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.524223 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.531036 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"collect-profiles-29493315-jjvdr\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.545786 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.549105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwljj\" (UniqueName: \"kubernetes.io/projected/456e451f-8bcc-49ad-a5e8-502c294e8518-kube-api-access-lwljj\") pod \"service-ca-operator-777779d784-gnlpm\" (UID: \"456e451f-8bcc-49ad-a5e8-502c294e8518\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.567160 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-pgctg"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.567752 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.572959 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.587647 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.592666 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkz5n\" (UniqueName: \"kubernetes.io/projected/8822fc2c-fece-436d-bd9b-d6ff2fbb72fb-kube-api-access-pkz5n\") pod \"machine-config-server-97kr8\" (UID: \"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb\") " pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.594577 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.604282 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.606392 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8b9vk\" (UniqueName: \"kubernetes.io/projected/609fd77d-7c9e-4a3f-855f-8aca45b53f4d-kube-api-access-8b9vk\") pod \"csi-hostpathplugin-qj7pb\" (UID: \"609fd77d-7c9e-4a3f-855f-8aca45b53f4d\") " pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.610362 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.610618 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.611168 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.111144414 +0000 UTC m=+142.906024398 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.611427 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.611824 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.111813236 +0000 UTC m=+142.906693220 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.618424 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.656413 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" event={"ID":"ab667a9d-5e0b-4faa-909e-5f778579e853","Type":"ContainerStarted","Data":"c34e549ef82e1de0c5ed647540a9639d2333b3334a982a8e78d4fa3ecb19f65f"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.656631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" event={"ID":"ab667a9d-5e0b-4faa-909e-5f778579e853","Type":"ContainerStarted","Data":"4c605d8dc690a359bbaa89ddb6dec96698b0e5a7bcd497622f21cfa5e3fe6d5a"} Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.657105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lcpg\" (UniqueName: \"kubernetes.io/projected/113634df-0b68-4670-8c3d-8d227c626095-kube-api-access-6lcpg\") pod \"ingress-canary-vc78g\" (UID: \"113634df-0b68-4670-8c3d-8d227c626095\") " pod="openshift-ingress-canary/ingress-canary-vc78g" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.665069 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-97kr8" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.676221 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-6kll7"] Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.690520 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.712427 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.712971 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.2129486 +0000 UTC m=+143.007828584 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.737115 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" event={"ID":"521dbee5-5d69-4fd4-bcfc-8b2b4b404389","Type":"ContainerStarted","Data":"8e98121d156a21d12f1db855471e30841edd1c80eab6747236be7c1d067475b0"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.737150 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" event={"ID":"521dbee5-5d69-4fd4-bcfc-8b2b4b404389","Type":"ContainerStarted","Data":"838a4034ac2c2f3026532b6c5f77a741f55bd0351a97d102fe712ff314e6e9a6"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.753532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h44hn" event={"ID":"cf33f13a-5328-47e6-8e14-1c0a84927117","Type":"ContainerStarted","Data":"9f59e2244a434602fa38d4428dfb3cec1341b3c14baac404a3dd53809eb264d8"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.753637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-h44hn" event={"ID":"cf33f13a-5328-47e6-8e14-1c0a84927117","Type":"ContainerStarted","Data":"b107089832d3e98f6e3468e44f02410eb5128b364a347fae1eab151f2eccb0b5"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.754679 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.762538 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"]
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.773018 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq"]
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.781639 4804 generic.go:334] "Generic (PLEG): container finished" podID="57150906-6899-4d65-b5e5-5092215695b7" containerID="a3bd50a4ac59b751fc34755a18bd2719b34f77d96924a2c6bf5778afe5316be5" exitCode=0
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.781766 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" event={"ID":"57150906-6899-4d65-b5e5-5092215695b7","Type":"ContainerDied","Data":"a3bd50a4ac59b751fc34755a18bd2719b34f77d96924a2c6bf5778afe5316be5"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.781858 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" event={"ID":"57150906-6899-4d65-b5e5-5092215695b7","Type":"ContainerStarted","Data":"1b698278f4e842310cd35893a303cee77e52661c63b15aac54280476e91040c1"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.795611 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" event={"ID":"ba221b2c-59ae-4358-9328-2639e1e4e1f9","Type":"ContainerStarted","Data":"8e34ee944e370d7953eefcf3fbfc0b2b8bf90e9f87c76853c0a8a0b91a834407"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.795694 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" event={"ID":"ba221b2c-59ae-4358-9328-2639e1e4e1f9","Type":"ContainerStarted","Data":"396b17af44e49b1540c54178c93fd38106d13f92d9832e437f0674f26b834e04"}
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.814245 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.814713 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.314697736 +0000 UTC m=+143.109577720 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:26 crc kubenswrapper[4804]: W0128 11:24:26.812851 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61387edd_4fc9_4cb7_8229_a6578d2d15fb.slice/crio-4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e WatchSource:0}: Error finding container 4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e: Status 404 returned error can't find the container with id 4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.918358 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:26 crc kubenswrapper[4804]: E0128 11:24:26.921655 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.421632851 +0000 UTC m=+143.216512835 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:26 crc kubenswrapper[4804]: I0128 11:24:26.954012 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vc78g"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.020852 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.021217 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.521204494 +0000 UTC m=+143.316084478 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.122408 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.123231 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.623199067 +0000 UTC m=+143.418079051 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.123589 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.124042 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.624034235 +0000 UTC m=+143.418914219 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.157632 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.177917 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.177981 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-h44hn"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.183539 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:27 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:27 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:27 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.183609 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.192768 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-m5p7p"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.214974 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-slcp9"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.218513 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-cljd9"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.225156 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.225600 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.725578172 +0000 UTC m=+143.520458156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.232032 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-slln9"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.240754 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-h44hn" podStartSLOduration=121.240727039 podStartE2EDuration="2m1.240727039s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:27.237340038 +0000 UTC m=+143.032220022" watchObservedRunningTime="2026-01-28 11:24:27.240727039 +0000 UTC m=+143.035607013"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.241697 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.242510 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mmdfp"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.244590 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.249194 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.250653 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-vbjk6"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.272100 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.279996 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.289406 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.291847 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-8hc98" podStartSLOduration=121.291823724 podStartE2EDuration="2m1.291823724s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:27.281456534 +0000 UTC m=+143.076336528" watchObservedRunningTime="2026-01-28 11:24:27.291823724 +0000 UTC m=+143.086703708"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.356016 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.359932 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.859901155 +0000 UTC m=+143.654781169 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.458922 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.460014 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:27.959979936 +0000 UTC m=+143.754859920 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.537772 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.543259 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.549641 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.561436 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.562302 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.562698 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.062680551 +0000 UTC m=+143.857560535 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.570272 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v"]
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.572276 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd56b6530_c7d7_432d_bd5e_1a07a2d94515.slice/crio-5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e WatchSource:0}: Error finding container 5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e: Status 404 returned error can't find the container with id 5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.582811 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.600660 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.617390 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm"]
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.658304 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5054f20f_444d_40e8_ad18_3515e1ff2638.slice/crio-3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671 WatchSource:0}: Error finding container 3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671: Status 404 returned error can't find the container with id 3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.664277 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.664668 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.164645404 +0000 UTC m=+143.959525388 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.676484 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.685316 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7"]
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.689286 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9927b5d4_5460_4d78_9320_af3916443c1a.slice/crio-02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673 WatchSource:0}: Error finding container 02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673: Status 404 returned error can't find the container with id 02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.694464 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"]
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.702781 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc03ebf08_d5a0_48b4_a1ca_3eec30c14490.slice/crio-7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900 WatchSource:0}: Error finding container 7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900: Status 404 returned error can't find the container with id 7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.716128 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"]
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.734456 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod456e451f_8bcc_49ad_a5e8_502c294e8518.slice/crio-14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c WatchSource:0}: Error finding container 14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c: Status 404 returned error can't find the container with id 14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.741712 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a69cec6_e1b7_4e4d_88f7_de85e459ed7b.slice/crio-b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b WatchSource:0}: Error finding container b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b: Status 404 returned error can't find the container with id b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.752767 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a5d490_28ef_438f_b03c_6b15d30bbb1e.slice/crio-29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0 WatchSource:0}: Error finding container 29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0: Status 404 returned error can't find the container with id 29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.753527 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-gsq9d"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.765734 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.766709 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.266694228 +0000 UTC m=+144.061574212 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.815070 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97kr8" event={"ID":"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb","Type":"ContainerStarted","Data":"9b139cb8da56097cf65f6612c2d1178228173d0d01df5245c5bd587c8df7da2b"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.815134 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-97kr8" event={"ID":"8822fc2c-fece-436d-bd9b-d6ff2fbb72fb","Type":"ContainerStarted","Data":"fb40ab307ce4a5caf1b231b75a7adfe6962821c8f2dc60b80dad372b90c37003"}
Jan 28 11:24:27 crc kubenswrapper[4804]: W0128 11:24:27.818795 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd0c2686a_d8ed_4c34_8677_4371daf94ea4.slice/crio-fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22 WatchSource:0}: Error finding container fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22: Status 404 returned error can't find the container with id fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.820272 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cljd9" event={"ID":"4e425cf1-0352-47be-9c58-2bad27ccc3c1","Type":"ContainerStarted","Data":"3abf8095650166015744b0692bfb8d755bffd67750afd5ad77a5855bdeef339a"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.820331 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-cljd9" event={"ID":"4e425cf1-0352-47be-9c58-2bad27ccc3c1","Type":"ContainerStarted","Data":"6b9f1fb982c5257d07339926fb7a00a49338abd8cff67e05976b575df897f9c8"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.820463 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-cljd9"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.822335 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerStarted","Data":"c2ace65eb04ab5ff8b961ebdb9574c3959291d26b7237bb5bd982c03d8d46b22"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.826267 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" event={"ID":"ffe68ef2-471a-42e3-a825-f90c8a5f6028","Type":"ContainerStarted","Data":"4fb80761b670b818cbfc00dc4729aa0a1753e14f72dc1531e8c266e78f899b4f"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.826313 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" event={"ID":"ffe68ef2-471a-42e3-a825-f90c8a5f6028","Type":"ContainerStarted","Data":"634af5bd10abf62a71d59f87a2541399c5773f0d3f1a7a24e748550c9703eb7f"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.834615 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" event={"ID":"c4cdff00-d1aa-4535-b269-b692986cd76c","Type":"ContainerStarted","Data":"f32d30fb5d7e1b78710b5ddfc122e05c1e81e8eb099800479d1eeb44526b6665"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.835569 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body=
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.835633 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.840305 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" event={"ID":"456e451f-8bcc-49ad-a5e8-502c294e8518","Type":"ContainerStarted","Data":"14de5827055986a9f83a91574ec8f25b8df7aa6ccb7b9855e7f209eafd33035c"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.880430 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vc78g"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.882726 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.885143 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.38511883 +0000 UTC m=+144.179998814 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.891153 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" event={"ID":"f90c0f76-ca48-4b2f-89cc-b90cc1172576","Type":"ContainerStarted","Data":"624b2d06e831bacfc999ebb84f34b3e807fa2e1f141266c26470178331680693"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.903487 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" event={"ID":"57150906-6899-4d65-b5e5-5092215695b7","Type":"ContainerStarted","Data":"7d2183592100c3b755ea4b9a67254d7368c8e8a78e571f1c24b874621cdac80a"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.942840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" event={"ID":"349fe87d-e741-4dc4-bc78-322b541e0a3f","Type":"ContainerStarted","Data":"21835e1f17594c29f6778f18a9d2cfa993b94d38a983da6a99362ca3804e3f5f"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.944820 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" event={"ID":"9927b5d4-5460-4d78-9320-af3916443c1a","Type":"ContainerStarted","Data":"02704172dc594609b80c9268bfb11b50001e91da6865bacc890399924f19a673"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.948680 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-qj7pb"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.949244 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" event={"ID":"61b65dc4-6aaf-4578-adf4-64759773196a","Type":"ContainerStarted","Data":"6fd618208dcc10b2db5a27d6c2d88f9f7014f82e2d583cd0b03a3ec78020706a"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953060 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6kll7" event={"ID":"61387edd-4fc9-4cb7-8229-a6578d2d15fb","Type":"ContainerStarted","Data":"83ab0c912a8a7d32eb665a95e871f12c36c294f768dbb5fcd25c4b56ddbd8f98"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953120 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-6kll7" event={"ID":"61387edd-4fc9-4cb7-8229-a6578d2d15fb","Type":"ContainerStarted","Data":"4fd92ad5b38c409bbbb9f0a38c3e7e188bb19c1833a427b53349f676e9af8a2e"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953699 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb"]
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.953998 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.957008 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerStarted","Data":"da180074ac3e1b702af197f95701d1cff294f3e8895503fdbfbde3d61d0ef87e"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.960130 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" event={"ID":"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d","Type":"ContainerStarted","Data":"5602dfca664869f18078de3f18533cb7e4f55f023e1dfd1ba6fe6bcc81da1296"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.961272 4804 patch_prober.go:28] interesting pod/console-operator-58897d9998-6kll7 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused" start-of-body=
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.961339 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-6kll7" podUID="61387edd-4fc9-4cb7-8229-a6578d2d15fb" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.23:8443/readyz\": dial tcp 10.217.0.23:8443: connect: connection refused"
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.968220 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" event={"ID":"0f90e352-ac01-40fb-bf8d-50500206f0ac","Type":"ContainerStarted","Data":"2fc249a3bb99b94948a6de7cc92b28d11421473128e5d99fc9e7af52b791c22f"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.970692 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" event={"ID":"625b312d-62b0-4965-966c-3605f4d649a4","Type":"ContainerStarted","Data":"040928f7f7783681cac1ca54191f2137bf69e3b7b5fd0b5aa33ffccd11b93cfe"}
Jan 28 11:24:27 crc kubenswrapper[4804]: I0128 11:24:27.985380 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:27 crc kubenswrapper[4804]: E0128 11:24:27.987323 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.487300678 +0000 UTC m=+144.282180662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.001333 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" event={"ID":"50a5d490-28ef-438f-b03c-6b15d30bbb1e","Type":"ContainerStarted","Data":"29e66fe3c6299793226f4367e79e47af740c2de1f533d8d285fca540e08158f0"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.015277 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" event={"ID":"881a5709-4ff6-448e-ba75-caf5f7e61a5b","Type":"ContainerStarted","Data":"5cc033f5b7fdb4ed4c13410df4b7e5bac38b23c6c7b99e1ae758945871f6d9b7"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.018285 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" event={"ID":"9ad95836-c587-4ca7-b5fa-f878af1019b6","Type":"ContainerStarted","Data":"4797ba443515c885c8f4072498faa67fe63c348f02456ef6e8f5d9f118858289"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.028995 4804 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" event={"ID":"2dce007c-8b8d-4271-bb40-7482176fc529","Type":"ContainerStarted","Data":"40b8ed310457bc0f00986ac6504595e3357a1c9be972ccf980b80a93508a837e"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.031248 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" event={"ID":"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b","Type":"ContainerStarted","Data":"b518443f38ffb9f84b3dd2dfaab36dbbe5e370b2ef802d93d58b808e24eab16b"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.033169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slcp9" event={"ID":"43de728c-beeb-4fde-832b-dcf5097867e0","Type":"ContainerStarted","Data":"0e322a333fb5eefa0f687550b5b99556748316711d7cbbeb522e3842b7256871"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.035747 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerStarted","Data":"883b034a2889463138fade7b419ea017c2bfce371979299cd8f7a797a2683e63"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.041393 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-47m7l" podStartSLOduration=124.041372851 podStartE2EDuration="2m4.041372851s" podCreationTimestamp="2026-01-28 11:22:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.039897393 +0000 UTC m=+143.834777377" watchObservedRunningTime="2026-01-28 11:24:28.041372851 +0000 UTC m=+143.836252835" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.051204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" event={"ID":"ab667a9d-5e0b-4faa-909e-5f778579e853","Type":"ContainerStarted","Data":"2c2bb69c0f630959a89190b34a64007b18f8c0bda2b19db0560598c1baaa8b7a"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.051719 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.054298 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" event={"ID":"e2b8b707-60c9-4138-a4d8-d218162737fe","Type":"ContainerStarted","Data":"64c5dc9b1e42c4e95788e781597b347ac0700676170c2c076bffecca9838cccf"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.064813 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerStarted","Data":"fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.064895 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerStarted","Data":"cffe7deccba04a98fba8c431ccb78fb720efb5536fc80dba3180146f85a85987"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.065397 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.074721 4804 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z4j56 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: 
connection refused" start-of-body= Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.074809 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.079363 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" event={"ID":"1a74db24-5aca-48f9-889c-e37d8cdba99e","Type":"ContainerStarted","Data":"c1faa4ff4a670d1fa673dfb9f4d02e027395370b2fbf081ec587f48043939450"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.084705 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerStarted","Data":"5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.086648 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.088683 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.58865463 +0000 UTC m=+144.383534794 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.093613 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerStarted","Data":"39c6be7d2c6b604e29ab674e70547e5294e550d001aed4bfc7286a6d8fd167c8"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.115680 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" event={"ID":"a7c281fd-3e5a-4edc-98f7-8703c1f08aab","Type":"ContainerStarted","Data":"955e354432e1521f0faa580d1e71e4f71a73c5784df07b9bdb25cf52569e40c5"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.119368 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"f8edde795f44d3c24d4992155087778dbf2413f6d87dca7b471cbe639efa2ffc"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.122760 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" event={"ID":"c03ebf08-d5a0-48b4-a1ca-3eec30c14490","Type":"ContainerStarted","Data":"7a15e2abcf69e999422174e26539765b911bdfe5dd9b584fca04291a550b3900"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.136719 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" event={"ID":"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3","Type":"ContainerStarted","Data":"2230dd1bd5b3c583e5a4b7a4c92a1eb57c941e41b51a2b6497c63e928058e888"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.140706 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerStarted","Data":"3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671"} Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.159664 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" podStartSLOduration=123.159638127 podStartE2EDuration="2m3.159638127s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.158439727 +0000 UTC m=+143.953319711" watchObservedRunningTime="2026-01-28 11:24:28.159638127 +0000 UTC m=+143.954518111" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.190345 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.190815 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.690800299 +0000 UTC m=+144.485680283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.201321 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:28 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:28 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:28 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.201380 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.216511 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-6kll7" podStartSLOduration=123.216458959 podStartE2EDuration="2m3.216458959s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.206279995 +0000 UTC m=+144.001159979" watchObservedRunningTime="2026-01-28 11:24:28.216458959 +0000 UTC m=+144.011338943" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.248314 4804 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openshift-console/downloads-7954f5f757-cljd9" podStartSLOduration=123.248283402 podStartE2EDuration="2m3.248283402s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.23783719 +0000 UTC m=+144.032717184" watchObservedRunningTime="2026-01-28 11:24:28.248283402 +0000 UTC m=+144.043163386" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.286488 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-z5n98" podStartSLOduration=123.286460633 podStartE2EDuration="2m3.286460633s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.283786885 +0000 UTC m=+144.078666870" watchObservedRunningTime="2026-01-28 11:24:28.286460633 +0000 UTC m=+144.081340617" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.299181 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.301196 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.801162035 +0000 UTC m=+144.596042019 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.359719 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82" podStartSLOduration=122.359698134 podStartE2EDuration="2m2.359698134s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.317739088 +0000 UTC m=+144.112619072" watchObservedRunningTime="2026-01-28 11:24:28.359698134 +0000 UTC m=+144.154578118" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.360846 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podStartSLOduration=123.360838021 podStartE2EDuration="2m3.360838021s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.359527968 +0000 UTC m=+144.154407952" watchObservedRunningTime="2026-01-28 11:24:28.360838021 +0000 UTC m=+144.155718015" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.402144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" 
(UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.402590 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:28.902569509 +0000 UTC m=+144.697449493 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.442452 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" podStartSLOduration=123.442425296 podStartE2EDuration="2m3.442425296s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:28.436017835 +0000 UTC m=+144.230897829" watchObservedRunningTime="2026-01-28 11:24:28.442425296 +0000 UTC m=+144.237305300" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.443511 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-97kr8" podStartSLOduration=5.44349714 podStartE2EDuration="5.44349714s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
11:24:28.396070246 +0000 UTC m=+144.190950230" watchObservedRunningTime="2026-01-28 11:24:28.44349714 +0000 UTC m=+144.238377124" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.505483 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.505926 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.005905116 +0000 UTC m=+144.800785100 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.608445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.608841 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.108825539 +0000 UTC m=+144.903705523 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.668167 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2" Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.709730 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.710289 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.210260204 +0000 UTC m=+145.005140188 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.811220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.812265 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.312245587 +0000 UTC m=+145.107125571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:28 crc kubenswrapper[4804]: I0128 11:24:28.915844 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:28 crc kubenswrapper[4804]: E0128 11:24:28.916357 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.416333538 +0000 UTC m=+145.211213522 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.018547 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.019044 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.519024774 +0000 UTC m=+145.313904758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.119095 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.120252 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.620227821 +0000 UTC m=+145.415107805 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.180338 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:29 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:29 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:29 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.180772 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.180561 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"7aefdfad6e14f5246d58518b9d8bbb54908b7a27f6e772137d3f8ac3ad1d170a"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.193687 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerStarted","Data":"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.205607 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slcp9" event={"ID":"43de728c-beeb-4fde-832b-dcf5097867e0","Type":"ContainerStarted","Data":"1a82d7ed0c45205bb923cb3978bff5ea93d09c04290ad9c41169c28a19b74f1e"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.225493 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.227468 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.727449595 +0000 UTC m=+145.522329579 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.228184 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vc78g" event={"ID":"113634df-0b68-4670-8c3d-8d227c626095","Type":"ContainerStarted","Data":"1632bd26c2563dcc25789aed5bf9598cc88023eda6a5e61451e4daccac27db90"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.228252 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vc78g" event={"ID":"113634df-0b68-4670-8c3d-8d227c626095","Type":"ContainerStarted","Data":"e62b61efa71260f978b3c2c4a4dcf021a1e38d7ac4ce65a2c47defc605859bf0"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.236387 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" event={"ID":"d0c2686a-d8ed-4c34-8677-4371daf94ea4","Type":"ContainerStarted","Data":"fea7ef4cc668293eb4bf8b237f28d1966937276987d2224f86cc726aa1dafb22"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.251271 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" event={"ID":"33a8cd21-66e2-4a77-9596-ea6af6f4f2b3","Type":"ContainerStarted","Data":"303863c11d583a696588c0039ac79e38b7285884c92ccebde06b11b5313c072e"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.267195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" 
event={"ID":"9927b5d4-5460-4d78-9320-af3916443c1a","Type":"ContainerStarted","Data":"e214781e6dc1330c8de3e95aa27dff62bdfc02e93f36dcd1215852d3374f77d6"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.291573 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xghdb" podStartSLOduration=124.291549155 podStartE2EDuration="2m4.291549155s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.289563101 +0000 UTC m=+145.084443105" watchObservedRunningTime="2026-01-28 11:24:29.291549155 +0000 UTC m=+145.086429139" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.311200 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" event={"ID":"61b65dc4-6aaf-4578-adf4-64759773196a","Type":"ContainerStarted","Data":"fb22c089271c5178d9c9e3aa5e583813de0002f1617ca7995a5a967788f6ab4f"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.329673 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.331474 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.831415962 +0000 UTC m=+145.626295946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.340560 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.352819 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vc78g" podStartSLOduration=6.352801024 podStartE2EDuration="6.352801024s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.351923355 +0000 UTC m=+145.146803339" watchObservedRunningTime="2026-01-28 11:24:29.352801024 +0000 UTC m=+145.147681008" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.363380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" event={"ID":"c03ebf08-d5a0-48b4-a1ca-3eec30c14490","Type":"ContainerStarted","Data":"28776ff7a2258a3015a449e7086b23688da26cb504c3f194f4d3f166c257fb67"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.386061 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" 
event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerStarted","Data":"e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.387445 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.391794 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-mtffd" podStartSLOduration=123.391778841 podStartE2EDuration="2m3.391778841s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.389867028 +0000 UTC m=+145.184747012" watchObservedRunningTime="2026-01-28 11:24:29.391778841 +0000 UTC m=+145.186658825" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.396239 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" event={"ID":"e2b8b707-60c9-4138-a4d8-d218162737fe","Type":"ContainerStarted","Data":"f97ba040ca3e9cdae62ae1b9da5d4c0d116d0ff1b57ce8d6552b481d29ed03b5"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.398768 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" event={"ID":"2dce007c-8b8d-4271-bb40-7482176fc529","Type":"ContainerStarted","Data":"2909fdd8821b315791fdc7eb7dfd42870ff8631f1128ff5ff5afa29847b31a79"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.409704 4804 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-wg94f container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/healthz\": 
dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.409761 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.11:8443/healthz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.415112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-pgctg" event={"ID":"1a74db24-5aca-48f9-889c-e37d8cdba99e","Type":"ContainerStarted","Data":"9909541a8b644dd2bf68670a3a60926fe93dd81ce5f6279b0b220f551453eaa1"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.422002 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" event={"ID":"9ad95836-c587-4ca7-b5fa-f878af1019b6","Type":"ContainerStarted","Data":"c32d2306f10484bb9231398f0fa50ecc60495be47b9d9bad2777ab191bf51e53"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.432768 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.438292 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" event={"ID":"625b312d-62b0-4965-966c-3605f4d649a4","Type":"ContainerStarted","Data":"47d63a959066a85b93c282b5d64604c46b7467edaf9c84205538e338580ecc2d"} Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.440188 4804 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:29.940169867 +0000 UTC m=+145.735049851 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.440171 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-66fn9" podStartSLOduration=123.440146456 podStartE2EDuration="2m3.440146456s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.436573449 +0000 UTC m=+145.231453433" watchObservedRunningTime="2026-01-28 11:24:29.440146456 +0000 UTC m=+145.235026430" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.480759 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" event={"ID":"a7c281fd-3e5a-4edc-98f7-8703c1f08aab","Type":"ContainerStarted","Data":"aa7b3313d3e88ff5f3fdd3048e319f34a55e2d620ee8244d458903dc1287cf7a"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.482234 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" podStartSLOduration=123.482207534 podStartE2EDuration="2m3.482207534s" 
podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.473717616 +0000 UTC m=+145.268597620" watchObservedRunningTime="2026-01-28 11:24:29.482207534 +0000 UTC m=+145.277087518" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.498196 4804 generic.go:334] "Generic (PLEG): container finished" podID="881a5709-4ff6-448e-ba75-caf5f7e61a5b" containerID="3c30605d1847d4d33751ffcafc95368fecee3049a3e341e77df6c23f0a1fe3df" exitCode=0 Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.498468 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" event={"ID":"881a5709-4ff6-448e-ba75-caf5f7e61a5b","Type":"ContainerDied","Data":"3c30605d1847d4d33751ffcafc95368fecee3049a3e341e77df6c23f0a1fe3df"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.505223 4804 csr.go:261] certificate signing request csr-j9g9m is approved, waiting to be issued Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.516286 4804 csr.go:257] certificate signing request csr-j9g9m is issued Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.516956 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" event={"ID":"0f90e352-ac01-40fb-bf8d-50500206f0ac","Type":"ContainerStarted","Data":"e4684dc793c71b17d780522df34c599379cfb5bef8e16b4539106e9631eef623"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.519847 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-f822b" podStartSLOduration=123.519828378 podStartE2EDuration="2m3.519828378s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-28 11:24:29.51715591 +0000 UTC m=+145.312035894" watchObservedRunningTime="2026-01-28 11:24:29.519828378 +0000 UTC m=+145.314708362" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.530059 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" event={"ID":"f90c0f76-ca48-4b2f-89cc-b90cc1172576","Type":"ContainerStarted","Data":"523bc1964b4a6d866fc50bc6401fbc40295cf4a51484c2911b6363035e19f603"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.530776 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.543180 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.543585 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.043543685 +0000 UTC m=+145.838423669 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.552172 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.549665 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-vdpgq" podStartSLOduration=123.549643765 podStartE2EDuration="2m3.549643765s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.548153616 +0000 UTC m=+145.343033600" watchObservedRunningTime="2026-01-28 11:24:29.549643765 +0000 UTC m=+145.344523749" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.556059 4804 generic.go:334] "Generic (PLEG): container finished" podID="65cbbd20-6185-455b-814b-7de34194ec29" containerID="b2b8867f191301517831ca2e719c7a54282c377393208f4b794e328d9d6b3e3b" exitCode=0 Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.556114 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" 
event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerDied","Data":"b2b8867f191301517831ca2e719c7a54282c377393208f4b794e328d9d6b3e3b"} Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.556453 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.056430778 +0000 UTC m=+145.851310762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.559761 4804 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-n9ds8 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.569069 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" podUID="f90c0f76-ca48-4b2f-89cc-b90cc1172576" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.40:8443/healthz\": dial tcp 10.217.0.40:8443: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.592121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" 
event={"ID":"6ba550eb-2fae-4448-9bc8-7c8ecf3de616","Type":"ContainerStarted","Data":"bdb87a4d01d1cdd6047b1a07d8021710f1cd0933374cbb22100d8df26605ca18"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.596327 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-hxdnv" podStartSLOduration=123.596283244 podStartE2EDuration="2m3.596283244s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.586046498 +0000 UTC m=+145.380926492" watchObservedRunningTime="2026-01-28 11:24:29.596283244 +0000 UTC m=+145.391163228" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.616210 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-slln9" podStartSLOduration=123.616184575 podStartE2EDuration="2m3.616184575s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.615751332 +0000 UTC m=+145.410631316" watchObservedRunningTime="2026-01-28 11:24:29.616184575 +0000 UTC m=+145.411064559" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.621205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerStarted","Data":"4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.622329 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.630650 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" event={"ID":"9d40e6f6-2a67-4ec3-a612-77c2f9f6517d","Type":"ContainerStarted","Data":"0fb8b126c781040de7adf6bf7928d003c933d812455f61bb1fbf4fd106abe81d"} Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635419 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635460 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635595 4804 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-z4j56 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.635680 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.647237 4804 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ml79j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get 
\"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.647303 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.654494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.657439 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.157410987 +0000 UTC m=+145.952290971 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.726584 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8" podStartSLOduration=123.726559243 podStartE2EDuration="2m3.726559243s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.725973294 +0000 UTC m=+145.520853278" watchObservedRunningTime="2026-01-28 11:24:29.726559243 +0000 UTC m=+145.521439227" Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.756813 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.762389 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.262363347 +0000 UTC m=+146.057243531 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.854065 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podStartSLOduration=123.854016172 podStartE2EDuration="2m3.854016172s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.839531096 +0000 UTC m=+145.634411080" watchObservedRunningTime="2026-01-28 11:24:29.854016172 +0000 UTC m=+145.648896166"
Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.855982 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mcdcv" podStartSLOduration=123.855971885 podStartE2EDuration="2m3.855971885s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:29.791616436 +0000 UTC m=+145.586496420" watchObservedRunningTime="2026-01-28 11:24:29.855971885 +0000 UTC m=+145.650851889"
Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.866688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.867129 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.36710689 +0000 UTC m=+146.161986874 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:29 crc kubenswrapper[4804]: I0128 11:24:29.971381 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:29 crc kubenswrapper[4804]: E0128 11:24:29.971740 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.471726138 +0000 UTC m=+146.266606122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.006469 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-6kll7"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.104505 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.105151 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.605127921 +0000 UTC m=+146.400007905 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.179558 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 28 11:24:30 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld
Jan 28 11:24:30 crc kubenswrapper[4804]: [+]process-running ok
Jan 28 11:24:30 crc kubenswrapper[4804]: healthz check failed
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.179628 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.207004 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.207368 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed.
No retries permitted until 2026-01-28 11:24:30.707354312 +0000 UTC m=+146.502234296 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.309511 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.311177 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.811139763 +0000 UTC m=+146.606019747 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.414087 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.415076 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:30.91479228 +0000 UTC m=+146.709672264 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.515365 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.515514 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.015484121 +0000 UTC m=+146.810364105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.516041 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.516698 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.01666429 +0000 UTC m=+146.811544454 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.521964 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-28 11:19:29 +0000 UTC, rotation deadline is 2026-10-19 17:13:13.097212243 +0000 UTC
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.522012 4804 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 6341h48m42.575204638s for next certificate rotation
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.617239 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.117212575 +0000 UTC m=+146.912092559 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.617106 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.617782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.618324 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.118311052 +0000 UTC m=+146.913191036 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.675081 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerStarted","Data":"79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.677092 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.681099 4804 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-44lsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused" start-of-body=
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.681172 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.32:6443/healthz\": dial tcp 10.217.0.32:6443: connect: connection refused"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.684226 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" event={"ID":"d0c2686a-d8ed-4c34-8677-4371daf94ea4","Type":"ContainerStarted","Data":"08e5f935d64639a4834695ccfdc8dd73b0b7645b5efe75f5bbc5d80e9f7742b5"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.684329 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" event={"ID":"d0c2686a-d8ed-4c34-8677-4371daf94ea4","Type":"ContainerStarted","Data":"21f445b12247db461476fb89d776ed5d90b41611135defcf774f096752b8ef72"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.695512 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" event={"ID":"349fe87d-e741-4dc4-bc78-322b541e0a3f","Type":"ContainerStarted","Data":"f29564c94936c996bc969fa08173301d4f19fb14727a22cc025b7ee06281451f"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.695920 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.700529 4804 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7ncgb container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused" start-of-body=
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.700600 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" podUID="349fe87d-e741-4dc4-bc78-322b541e0a3f" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.27:8443/healthz\": dial tcp 10.217.0.27:8443: connect: connection refused"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.703686 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" event={"ID":"e2b8b707-60c9-4138-a4d8-d218162737fe","Type":"ContainerStarted","Data":"696c818d412dee542b594b3f7f141b8d7a946d118198acc75fa302924aa0633e"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.708779 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerStarted","Data":"cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.717932 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" event={"ID":"456e451f-8bcc-49ad-a5e8-502c294e8518","Type":"ContainerStarted","Data":"165731136b6dba16f5966de2819861aff476cef91a2aa7d5362d75f28552dcec"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.721611 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.721867 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.221805544 +0000 UTC m=+147.016685518 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.721955 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.722383 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.222375702 +0000 UTC m=+147.017255686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.734580 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"1040ad252fc2679fb72d3973423667857921d187dc706253779751b8df30668b"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.739528 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podStartSLOduration=124.739511303 podStartE2EDuration="2m4.739511303s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.739014647 +0000 UTC m=+146.533894631" watchObservedRunningTime="2026-01-28 11:24:30.739511303 +0000 UTC m=+146.534391287"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.749659 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" event={"ID":"3a69cec6-e1b7-4e4d-88f7-de85e459ed7b","Type":"ContainerStarted","Data":"ebe9e57661d63c96b8fd353a537d733c19e6e3d4e1c20d7c46889fbfaffc3d6b"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.750865 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.752535 4804 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-2xbh5 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused" start-of-body=
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.752621 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" podUID="3a69cec6-e1b7-4e4d-88f7-de85e459ed7b" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.42:5443/healthz\": dial tcp 10.217.0.42:5443: connect: connection refused"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.768579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerStarted","Data":"50d11c9040bc9b6079b03d0b0e72d9153589f071266fdff2a811f4fe2d4d54c9"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.779965 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" event={"ID":"625b312d-62b0-4965-966c-3605f4d649a4","Type":"ContainerStarted","Data":"26ea0f4cae28c7f181de8c65eda71ec173d3b499e7e3eb9fc6402df34ccba462"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.790270 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" event={"ID":"6ba550eb-2fae-4448-9bc8-7c8ecf3de616","Type":"ContainerStarted","Data":"07c3cf02afa453704d9515a6c925e76a3631b521ac7192734fcad743048f1518"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.790341 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" event={"ID":"6ba550eb-2fae-4448-9bc8-7c8ecf3de616","Type":"ContainerStarted","Data":"8207bf0764fe6a56f9e52bcbcc7d4553ca54b56a34a082f3e7c9fc99daac07a5"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.807041 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gnlpm" podStartSLOduration=124.807022226 podStartE2EDuration="2m4.807022226s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.806016213 +0000 UTC m=+146.600896197" watchObservedRunningTime="2026-01-28 11:24:30.807022226 +0000 UTC m=+146.601902210"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.809251 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" event={"ID":"9927b5d4-5460-4d78-9320-af3916443c1a","Type":"ContainerStarted","Data":"ca24be1a7427663c6a1037c4165004b43fdaf4ac41364af4082d0673b92250f2"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.822934 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" event={"ID":"50a5d490-28ef-438f-b03c-6b15d30bbb1e","Type":"ContainerStarted","Data":"6f31dd9e450b57f26593c55c9ee08606b75d55be1569497d2d751448a3e0a9f0"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.823004 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" event={"ID":"50a5d490-28ef-438f-b03c-6b15d30bbb1e","Type":"ContainerStarted","Data":"2fb2c501ae8507ec5241c44f77cec01654f34450e4d8b3de5187dd5ab29b3151"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.824543 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\"
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.827305 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.327255589 +0000 UTC m=+147.122135583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.842280 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" event={"ID":"881a5709-4ff6-448e-ba75-caf5f7e61a5b","Type":"ContainerStarted","Data":"bc8eaf7239940bfe6a5561c4bfaaa817ba0282ae7eb954fe8492921c08cb8d94"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.856554 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"94f50d2fb3e1793d3de9237b4fc9967b585a6cb26ec523bfe417f73119ba6b2e"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.869996 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" event={"ID":"c4cdff00-d1aa-4535-b269-b692986cd76c","Type":"ContainerStarted","Data":"22a806fce53fd0eaa60c0d6286732b7e11b99ca72b6bdfbf096b80542aba1032"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.880000 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.880362 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.906725 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" podStartSLOduration=125.906705043 podStartE2EDuration="2m5.906705043s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.904856093 +0000 UTC m=+146.699736077" watchObservedRunningTime="2026-01-28 11:24:30.906705043 +0000 UTC m=+146.701585267"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.911933 4804 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-2kmn2 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body=
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.912006 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" podUID="881a5709-4ff6-448e-ba75-caf5f7e61a5b" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.22:8443/livez\": dial tcp 10.217.0.22:8443: connect: connection refused"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.912191 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-slcp9" event={"ID":"43de728c-beeb-4fde-832b-dcf5097867e0","Type":"ContainerStarted","Data":"0c0dc66ac9be66325e7e4dcbe6abef69c55b8943ba19d2dd4536dfc36ada4c0b"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.912685 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-slcp9"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.929801 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:30 crc kubenswrapper[4804]: E0128 11:24:30.932352 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.432330433 +0000 UTC m=+147.227210627 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.969754 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-g8nn2"
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.969802 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" event={"ID":"a7c281fd-3e5a-4edc-98f7-8703c1f08aab","Type":"ContainerStarted","Data":"55976ebceac6834076b5e889919819c6e43328a0039fab2e0659540494c6445a"}
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.997290 4804 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ml79j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body=
Jan 28 11:24:30 crc kubenswrapper[4804]: I0128 11:24:30.997399 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.023310 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-n9ds8"
Jan 28 11:24:31
crc kubenswrapper[4804]: I0128 11:24:31.023540 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.034121 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-gsq9d" podStartSLOduration=125.034100019 podStartE2EDuration="2m5.034100019s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.029349343 +0000 UTC m=+146.824229327" watchObservedRunningTime="2026-01-28 11:24:31.034100019 +0000 UTC m=+146.828979993"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.035155 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" podStartSLOduration=125.035149974 podStartE2EDuration="2m5.035149974s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:30.982451716 +0000 UTC m=+146.777331720" watchObservedRunningTime="2026-01-28 11:24:31.035149974 +0000 UTC m=+146.830029958"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.036743 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.040434 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.540397935 +0000 UTC m=+147.335277919 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.054904 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.059088 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.559066967 +0000 UTC m=+147.353946951 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.119515 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-m5p7p" podStartSLOduration=125.119484027 podStartE2EDuration="2m5.119484027s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.118141413 +0000 UTC m=+146.913021407" watchObservedRunningTime="2026-01-28 11:24:31.119484027 +0000 UTC m=+146.914364011"
Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.157763 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.160173 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.66013916 +0000 UTC m=+147.455019144 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.175066 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-8m8b7" podStartSLOduration=125.175020378 podStartE2EDuration="2m5.175020378s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.171513173 +0000 UTC m=+146.966393157" watchObservedRunningTime="2026-01-28 11:24:31.175020378 +0000 UTC m=+146.969900362" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.187870 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:31 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:31 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:31 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.187968 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.211900 4804 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-6g5ff" podStartSLOduration=125.211864535 podStartE2EDuration="2m5.211864535s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.208173484 +0000 UTC m=+147.003053468" watchObservedRunningTime="2026-01-28 11:24:31.211864535 +0000 UTC m=+147.006744519" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.260029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.260357 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.760343974 +0000 UTC m=+147.555223958 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.301501 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mmdfp" podStartSLOduration=126.301473502 podStartE2EDuration="2m6.301473502s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.298698571 +0000 UTC m=+147.093578555" watchObservedRunningTime="2026-01-28 11:24:31.301473502 +0000 UTC m=+147.096353486" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.363562 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.364716 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.864691884 +0000 UTC m=+147.659571868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.383434 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-v79mb" podStartSLOduration=125.383405737 podStartE2EDuration="2m5.383405737s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.339398765 +0000 UTC m=+147.134278769" watchObservedRunningTime="2026-01-28 11:24:31.383405737 +0000 UTC m=+147.178285721" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.384062 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-8rkrw" podStartSLOduration=125.384051109 podStartE2EDuration="2m5.384051109s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.376577854 +0000 UTC m=+147.171457828" watchObservedRunningTime="2026-01-28 11:24:31.384051109 +0000 UTC m=+147.178931123" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.419892 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-slcp9" podStartSLOduration=8.419848602 podStartE2EDuration="8.419848602s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.414433234 +0000 UTC m=+147.209313218" watchObservedRunningTime="2026-01-28 11:24:31.419848602 +0000 UTC m=+147.214728596" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.467795 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.468137 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:31.968121694 +0000 UTC m=+147.763001678 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.518955 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" podStartSLOduration=126.518926739 podStartE2EDuration="2m6.518926739s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.517988928 +0000 UTC m=+147.312868912" watchObservedRunningTime="2026-01-28 11:24:31.518926739 +0000 UTC m=+147.313806753" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.573413 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.573786 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.073767176 +0000 UTC m=+147.868647160 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.612836 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" podStartSLOduration=125.612794815 podStartE2EDuration="2m5.612794815s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.612234757 +0000 UTC m=+147.407114741" watchObservedRunningTime="2026-01-28 11:24:31.612794815 +0000 UTC m=+147.407674799" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.675349 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.676260 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.176239216 +0000 UTC m=+147.971119190 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.688510 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" podStartSLOduration=125.688491557 podStartE2EDuration="2m5.688491557s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.647421971 +0000 UTC m=+147.442301955" watchObservedRunningTime="2026-01-28 11:24:31.688491557 +0000 UTC m=+147.483371541" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.776542 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.777252 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.277235436 +0000 UTC m=+148.072115420 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.878762 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.879523 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.879723 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.879920 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:32.37990196 +0000 UTC m=+148.174781944 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.948210 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" event={"ID":"65cbbd20-6185-455b-814b-7de34194ec29","Type":"ContainerStarted","Data":"1e7f73bb71919aa179c2c8e1a5de137b2caec6edbe00500cb701c732c3a9e8ce"} Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.949479 4804 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-ml79j container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" start-of-body= Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.949514 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.36:8080/healthz\": dial tcp 10.217.0.36:8080: connect: connection refused" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.971757 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7ncgb" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.980984 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.981307 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.481262743 +0000 UTC m=+148.276142727 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.981447 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.981687 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:31 crc kubenswrapper[4804]: 
I0128 11:24:31.981857 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.982007 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:31 crc kubenswrapper[4804]: E0128 11:24:31.982556 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.482539084 +0000 UTC m=+148.277419108 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:31 crc kubenswrapper[4804]: I0128 11:24:31.991920 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.000599 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.001722 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-qdn6v" podStartSLOduration=126.001711832 podStartE2EDuration="2m6.001711832s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:31.690380869 +0000 UTC m=+147.485260863" watchObservedRunningTime="2026-01-28 11:24:32.001711832 +0000 UTC m=+147.796591816" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.002191 4804 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" podStartSLOduration=126.002186748 podStartE2EDuration="2m6.002186748s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:32.001130474 +0000 UTC m=+147.796010448" watchObservedRunningTime="2026-01-28 11:24:32.002186748 +0000 UTC m=+147.797066732" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.003846 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.089395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.091040 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.591017919 +0000 UTC m=+148.385897903 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.166078 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.177145 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.178388 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.179551 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.189163 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.191321 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.192133 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:32 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:32 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:32 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.192193 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.192662 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.205968 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.705946916 +0000 UTC m=+148.500826900 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.249172 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294300 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294563 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294641 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.294676 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.294764 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.794749958 +0000 UTC m=+148.589629942 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.357101 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-2xbh5" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395613 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395712 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.395754 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.396041 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.896028847 +0000 UTC m=+148.690908831 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.396718 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.397011 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.448585 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.453577 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.493558 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497390 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.497531 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.997510993 +0000 UTC m=+148.792390977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497668 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497725 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497755 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.497821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.498141 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:32.998133363 +0000 UTC m=+148.793013347 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.499695 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"certified-operators-gw5tb\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.510034 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.548808 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.599598 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.599932 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.599979 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.600050 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.600531 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " 
pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.600615 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.100595121 +0000 UTC m=+148.895475105 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.600861 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.662328 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-48gg7"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.669985 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.700688 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"community-operators-hzmvb\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.701353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.701628 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.201612492 +0000 UTC m=+148.996492466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.737355 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.779196 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811503 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811735 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811797 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " 
pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.811822 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.811981 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.311965019 +0000 UTC m=+149.106845003 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.830510 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-kvdtx"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.831720 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.886645 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"] Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917098 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917636 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917816 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.917950 4804 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.918026 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.918104 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.918687 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.919033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:32 crc kubenswrapper[4804]: E0128 11:24:32.919433 4804 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.419416831 +0000 UTC m=+149.214296815 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.954995 4804 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-44lsd container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.32:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.955211 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.32:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 28 11:24:32 crc kubenswrapper[4804]: I0128 11:24:32.972844 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"certified-operators-48gg7\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") " pod="openshift-marketplace/certified-operators-48gg7" Jan 28 
11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.001384 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.002099 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.010795 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.011099 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.020805 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.021191 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.021263 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.021311 
4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.021658 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.521617751 +0000 UTC m=+149.316497735 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.022294 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.022464 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: 
I0128 11:24:33.048294 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.054339 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.079413 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"community-operators-kvdtx\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.125649 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.125871 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.125914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.126350 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.626337453 +0000 UTC m=+149.421217437 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.175666 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"79c972b68dd0407ad190f5e389c998aa8f50ba7e67254fb24302fd6cf0cfe94b"} Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.175718 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"c72a9012ebef3c27109c781d40de661745812bdc4a5532f3b04d5473d5d61e2a"} Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.193218 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:33 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:33 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:33 crc 
kubenswrapper[4804]: healthz check failed Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.193286 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.206113 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.221733 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.235088 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.235687 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.235727 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 
11:24:33.275831 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.293092 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.793062017 +0000 UTC m=+149.587942001 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.346376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.349168 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.849150716 +0000 UTC m=+149.644030700 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.361187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.438171 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.449064 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.449642 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:33.949616178 +0000 UTC m=+149.744496162 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.550964 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.551473 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.051455767 +0000 UTC m=+149.846335751 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: W0128 11:24:33.622302 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb WatchSource:0}: Error finding container 1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb: Status 404 returned error can't find the container with id 1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.656850 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.657809 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.157783622 +0000 UTC m=+149.952663606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.758496 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.758823 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.258811283 +0000 UTC m=+150.053691257 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.863363 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.863772 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.363749752 +0000 UTC m=+150.158629726 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.915936 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.931573 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"] Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.965900 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:33 crc kubenswrapper[4804]: E0128 11:24:33.966220 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.46620891 +0000 UTC m=+150.261088884 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:33 crc kubenswrapper[4804]: I0128 11:24:33.995130 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:24:34 crc kubenswrapper[4804]: W0128 11:24:34.042393 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3a8d8bca_1ae3_44d1_9793_29fc2a2f5e8d.slice/crio-ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1 WatchSource:0}: Error finding container ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1: Status 404 returned error can't find the container with id ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1 Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.068679 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.069048 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.56902683 +0000 UTC m=+150.363906814 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.177700 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.178041 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.678028343 +0000 UTC m=+150.472908327 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.180600 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerStarted","Data":"ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.188847 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:34 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:34 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:34 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.188978 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.204446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"acda3dbec85d3fa0ee275eb6129c2c813f4e3844d964668a69753eb930b9adf5"} Jan 28 11:24:34 crc kubenswrapper[4804]: 
I0128 11:24:34.204507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"c4a9cf468c78cf8a7077fb201a18268b7f6871a942cd86bef6a0a81214408c7a"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.244958 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"e961f35e039ee65dcb5d21f6c328c81255264a7e828ea30c312b2947d5d33dff"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.245009 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1930818bca9aae618d0ce8587923062cb2efdff40965af94afde97f68ede81fb"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.258402 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerStarted","Data":"f2d704f75cce250d039d0dd04e24016c6014cdedb092df3fd7df1955f57ab50a"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.279695 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.280383 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-28 11:24:34.780355826 +0000 UTC m=+150.575235840 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.309728 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.324245 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" event={"ID":"609fd77d-7c9e-4a3f-855f-8aca45b53f4d","Type":"ContainerStarted","Data":"fe87ae707136fde84e732e2eb14bec7211f95794ae9505a267289bdc0f27bdfc"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.347267 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"99427eca64f1fb6963924a5df595bed5dcf9ca2e6752fe7aa27983447bc5452a"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.347328 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"ea4b5ed441b759e15e596e88595a84ee67c7e289506e79699b2d4de856083eb0"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.348159 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.352741 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerStarted","Data":"60c5c3bae740bf47c18e8908e6f28f0a1a7fe1ff6bab40703594d2789651297c"} Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.387183 4804 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.387997 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.389407 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.8893926 +0000 UTC m=+150.684272584 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.394990 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-qj7pb" podStartSLOduration=11.394955763 podStartE2EDuration="11.394955763s" podCreationTimestamp="2026-01-28 11:24:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:34.374648527 +0000 UTC m=+150.169528511" watchObservedRunningTime="2026-01-28 11:24:34.394955763 +0000 UTC m=+150.189835747" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.488639 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.490290 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:34.990270606 +0000 UTC m=+150.785150590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.530830 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.562425 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.563494 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.571710 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.591054 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.591464 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.091448892 +0000 UTC m=+150.886328876 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.596281 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693142 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.693305 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.19327074 +0000 UTC m=+150.988150724 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693553 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693650 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693696 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.693740 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qz4p\" (UniqueName: 
\"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.694245 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.194228572 +0000 UTC m=+150.989108556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795501 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795757 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795800 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qz4p\" 
(UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.795825 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.796253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.796326 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.296311657 +0000 UTC m=+151.091191641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.796534 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.820734 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"redhat-marketplace-9b7c6\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.888875 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.900611 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:34 crc kubenswrapper[4804]: E0128 11:24:34.901106 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.401086721 +0000 UTC m=+151.195966765 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.968727 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.970217 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:34 crc kubenswrapper[4804]: I0128 11:24:34.983630 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.026595 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.027404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.027442 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.027521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: E0128 11:24:35.027972 4804 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.527945379 +0000 UTC m=+151.322825363 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.137383 4804 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-28T11:24:34.387207028Z","Handler":null,"Name":""} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139193 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139274 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139293 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.139678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: E0128 11:24:35.139924 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-28 11:24:35.639913338 +0000 UTC m=+151.434793322 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-src4s" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.140402 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.156020 4804 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.156056 4804 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.172055 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"redhat-marketplace-4842n\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") " pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.181123 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" 
start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:35 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:35 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:35 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.181168 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.242417 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.279276 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.322187 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:24:35 crc kubenswrapper[4804]: W0128 11:24:35.330052 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6caae643_ab85_4628_bcb1_9c0ecc48c568.slice/crio-1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e WatchSource:0}: Error finding container 1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e: Status 404 returned error can't find the container with id 1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.343562 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.348366 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.348949 4804 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.348993 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.350001 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.354349 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.362047 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.369645 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.383565 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerStarted","Data":"9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.383609 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerStarted","Data":"07feb2bd10defcd28c969c63a5ad2b4221cd779c52d294146538a8b53582f860"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.396478 4804 generic.go:334] "Generic (PLEG): container finished" podID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.396538 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.398240 4804 generic.go:334] "Generic (PLEG): container finished" podID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerID="224ba74fdc92a764e31b68f322cd68766ad88b0938c015d6c3219ec78f441a34" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.398284 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"224ba74fdc92a764e31b68f322cd68766ad88b0938c015d6c3219ec78f441a34"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.400720 4804 generic.go:334] "Generic (PLEG): 
container finished" podID="4ad471e3-4346-4464-94bf-778299801fe4" containerID="46ebddf77e338edea495290c557790d95f2de2df53a4e7134b3e39d453fa17af" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.400764 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"46ebddf77e338edea495290c557790d95f2de2df53a4e7134b3e39d453fa17af"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.421212 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-src4s\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.424589 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=3.424550568 podStartE2EDuration="3.424550568s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:35.409606928 +0000 UTC m=+151.204486912" watchObservedRunningTime="2026-01-28 11:24:35.424550568 +0000 UTC m=+151.219430552" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.429523 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.435953 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerStarted","Data":"1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e"} Jan 28 
11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.439215 4804 generic.go:334] "Generic (PLEG): container finished" podID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98" exitCode=0 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.441419 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.441521 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerStarted","Data":"306a58f4bdfd74cc31f69b2bdc88525986d7ff5e31a732a9de2866902df8686e"} Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.451832 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.452457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.452521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod 
\"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.550061 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.551380 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.555369 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.555860 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.555898 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.561335 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " 
pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.561635 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.567029 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.590215 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod \"redhat-operators-jmw4q\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.659504 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.659625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.659666 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.689029 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"] Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.700228 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.709375 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:35 crc kubenswrapper[4804]: E0128 11:24:35.713002 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod24e7f4b9_abfc_4b9b_929b_1288abb63cc2.slice/crio-9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:24:35 crc kubenswrapper[4804]: W0128 11:24:35.737512 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podac859130_1b71_4993_ab3d_66600459a32a.slice/crio-035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195 WatchSource:0}: Error finding container 035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195: Status 404 returned error can't find the container with id 035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195 Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.761923 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod 
\"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762044 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762686 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.762870 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.792310 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod \"redhat-operators-nw6s2\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") " 
pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.843059 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.884131 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.895710 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-2kmn2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.929669 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930328 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930337 4804 patch_prober.go:28] interesting pod/downloads-7954f5f757-cljd9 container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" start-of-body= Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930375 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.930408 4804 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-console/downloads-7954f5f757-cljd9" podUID="4e425cf1-0352-47be-9c58-2bad27ccc3c1" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.29:8080/\": dial tcp 10.217.0.29:8080: connect: connection refused" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.941012 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.941828 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.958343 4804 patch_prober.go:28] interesting pod/console-f9d7485db-xghdb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.958405 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xghdb" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.984684 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:35 crc kubenswrapper[4804]: I0128 11:24:35.984723 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.020495 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.027574 4804 patch_prober.go:28] interesting pod/apiserver-76f77b778f-vbjk6 
container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]log ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]etcd ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/generic-apiserver-start-informers ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/max-in-flight-filter ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 28 11:24:36 crc kubenswrapper[4804]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/project.openshift.io-projectcache ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/openshift.io-startinformers ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 28 11:24:36 crc kubenswrapper[4804]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 28 11:24:36 crc kubenswrapper[4804]: livez check failed Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.027640 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" podUID="65cbbd20-6185-455b-814b-7de34194ec29" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.175400 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.181682 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:36 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:36 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.181740 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.214689 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.466015 4804 generic.go:334] "Generic (PLEG): container finished" podID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerID="04c43db3e70bb20141e7892290639067d3851e183e916843eb2d0aab2b130c9a" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.466119 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"04c43db3e70bb20141e7892290639067d3851e183e916843eb2d0aab2b130c9a"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.472159 4804 generic.go:334] "Generic (PLEG): container finished" podID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerID="9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.472283 4804 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerDied","Data":"9f287564e4a6e36a41c42ef4b3439552b58a68748c72b11144f6fcfbe9b02cd5"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.488393 4804 generic.go:334] "Generic (PLEG): container finished" podID="b641b655-0d3e-4838-8c87-fc72873f1944" containerID="38d5811043b3f5ad798e66586c4ba52ca430539e3b5096297f2d0e1b1b72ab80" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.488503 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"38d5811043b3f5ad798e66586c4ba52ca430539e3b5096297f2d0e1b1b72ab80"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.488543 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerStarted","Data":"b9f8fd7843e0d657401a449864e7360a08eaacd9d3a996600b88abc62b6de5e9"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.505545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerStarted","Data":"21c407385a0e63e468749b798e82d759e0bd8cab55527e3595f2c32049181c1c"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.510760 4804 generic.go:334] "Generic (PLEG): container finished" podID="ac859130-1b71-4993-ab3d-66600459a32a" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f" exitCode=0 Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.510955 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" 
event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.511029 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerStarted","Data":"035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195"} Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.544556 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"] Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.595811 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:24:36 crc kubenswrapper[4804]: I0128 11:24:36.943167 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.180452 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:37 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:37 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:37 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.180516 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.582828 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerStarted","Data":"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"} Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.583665 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.638188 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" podStartSLOduration=131.63816137 podStartE2EDuration="2m11.63816137s" podCreationTimestamp="2026-01-28 11:22:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:37.633394734 +0000 UTC m=+153.428274718" watchObservedRunningTime="2026-01-28 11:24:37.63816137 +0000 UTC m=+153.433041344" Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.642584 4804 generic.go:334] "Generic (PLEG): container finished" podID="759bdf85-0cca-46db-8126-fab61a8664a8" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc" exitCode=0 Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.642646 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc"} Jan 28 11:24:37 crc kubenswrapper[4804]: I0128 11:24:37.642696 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerStarted","Data":"8508960517ab52e83d2de6d52c76bf4bc148c42531ea9ecd0a9fb9ecc845cace"} Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 
11:24:38.087537 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.118823 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") pod \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.118896 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") pod \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\" (UID: \"24e7f4b9-abfc-4b9b-929b-1288abb63cc2\") " Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.121714 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "24e7f4b9-abfc-4b9b-929b-1288abb63cc2" (UID: "24e7f4b9-abfc-4b9b-929b-1288abb63cc2"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.129042 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "24e7f4b9-abfc-4b9b-929b-1288abb63cc2" (UID: "24e7f4b9-abfc-4b9b-929b-1288abb63cc2"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.180039 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:38 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:38 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:38 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.180168 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.221505 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.221544 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/24e7f4b9-abfc-4b9b-929b-1288abb63cc2-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.589683 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 11:24:38 crc kubenswrapper[4804]: E0128 11:24:38.589961 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerName="pruner" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.589973 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerName="pruner" Jan 28 11:24:38 crc 
kubenswrapper[4804]: I0128 11:24:38.590097 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e7f4b9-abfc-4b9b-929b-1288abb63cc2" containerName="pruner" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.590471 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.596287 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.596871 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.616028 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.628934 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.629016 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.676479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" 
event={"ID":"24e7f4b9-abfc-4b9b-929b-1288abb63cc2","Type":"ContainerDied","Data":"07feb2bd10defcd28c969c63a5ad2b4221cd779c52d294146538a8b53582f860"} Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.676560 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="07feb2bd10defcd28c969c63a5ad2b4221cd779c52d294146538a8b53582f860" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.677534 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.731410 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.731948 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.732070 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.771861 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod 
\"revision-pruner-8-crc\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:38 crc kubenswrapper[4804]: I0128 11:24:38.951775 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.181151 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:39 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:39 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:39 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.181213 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.507935 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 28 11:24:39 crc kubenswrapper[4804]: I0128 11:24:39.699140 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerStarted","Data":"d129e38a5b690123ccc9fe380f07527c9a27b1434082899a97ca0ad67cfe6489"} Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.179386 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:40 crc kubenswrapper[4804]: 
[-]has-synced failed: reason withheld Jan 28 11:24:40 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:40 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.179453 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.721725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerStarted","Data":"ee523b44950b057c5d63cccbbd87eca5b1c5a29dacf9deee8a6e2ecaee6f9f0f"} Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.740213 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=2.74017535 podStartE2EDuration="2.74017535s" podCreationTimestamp="2026-01-28 11:24:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:40.738806165 +0000 UTC m=+156.533686169" watchObservedRunningTime="2026-01-28 11:24:40.74017535 +0000 UTC m=+156.535055334" Jan 28 11:24:40 crc kubenswrapper[4804]: I0128 11:24:40.956481 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-slcp9" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:40.996146 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.002769 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-vbjk6" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.188487 4804 
patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:41 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:41 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:41 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.188544 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.754522 4804 generic.go:334] "Generic (PLEG): container finished" podID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerID="cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895" exitCode=0 Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.754595 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerDied","Data":"cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895"} Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.758960 4804 generic.go:334] "Generic (PLEG): container finished" podID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerID="ee523b44950b057c5d63cccbbd87eca5b1c5a29dacf9deee8a6e2ecaee6f9f0f" exitCode=0 Jan 28 11:24:41 crc kubenswrapper[4804]: I0128 11:24:41.759930 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerDied","Data":"ee523b44950b057c5d63cccbbd87eca5b1c5a29dacf9deee8a6e2ecaee6f9f0f"} Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.179015 
4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:42 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:42 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:42 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.179067 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.583242 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:24:42 crc kubenswrapper[4804]: I0128 11:24:42.584075 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:24:43 crc kubenswrapper[4804]: I0128 11:24:43.179226 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:43 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:43 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:43 crc 
kubenswrapper[4804]: healthz check failed Jan 28 11:24:43 crc kubenswrapper[4804]: I0128 11:24:43.179281 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:44 crc kubenswrapper[4804]: I0128 11:24:44.178746 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:44 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:44 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:44 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:44 crc kubenswrapper[4804]: I0128 11:24:44.179284 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.179675 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:45 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:45 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:45 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.179747 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with 
statuscode: 500" Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.939023 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-cljd9" Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.940968 4804 patch_prober.go:28] interesting pod/console-f9d7485db-xghdb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Jan 28 11:24:45 crc kubenswrapper[4804]: I0128 11:24:45.941031 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xghdb" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" probeResult="failure" output="Get \"https://10.217.0.8:8443/health\": dial tcp 10.217.0.8:8443: connect: connection refused" Jan 28 11:24:46 crc kubenswrapper[4804]: I0128 11:24:46.183063 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:46 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:46 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:46 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:46 crc kubenswrapper[4804]: I0128 11:24:46.183635 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:46 crc kubenswrapper[4804]: I0128 11:24:46.994657 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod 
\"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.001146 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/03844e8b-8d66-4cd7-aa19-51caa1407918-metrics-certs\") pod \"network-metrics-daemon-bgqd8\" (UID: \"03844e8b-8d66-4cd7-aa19-51caa1407918\") " pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.143438 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-bgqd8" Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.179939 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:47 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:47 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:47 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.180027 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.866796 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-4wpb6_46da2b10-cba3-46fa-a2f3-972499966fd3/cluster-samples-operator/0.log" Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.866847 4804 generic.go:334] "Generic (PLEG): container finished" podID="46da2b10-cba3-46fa-a2f3-972499966fd3" 
containerID="c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd" exitCode=2 Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.866904 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerDied","Data":"c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd"} Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.867393 4804 scope.go:117] "RemoveContainer" containerID="c52a93ee57d64c17d0c13644799fc0bc866276dac487ae35f364d3fbeb1299dd" Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.981830 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:47 crc kubenswrapper[4804]: I0128 11:24:47.989545 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.016306 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") pod \"ae7433f6-40cb-4caf-8356-10bb93645af5\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.016399 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") pod \"ae7433f6-40cb-4caf-8356-10bb93645af5\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.017676 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume" (OuterVolumeSpecName: 
"config-volume") pod "ae7433f6-40cb-4caf-8356-10bb93645af5" (UID: "ae7433f6-40cb-4caf-8356-10bb93645af5"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.044247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq" (OuterVolumeSpecName: "kube-api-access-4nhdq") pod "ae7433f6-40cb-4caf-8356-10bb93645af5" (UID: "ae7433f6-40cb-4caf-8356-10bb93645af5"). InnerVolumeSpecName "kube-api-access-4nhdq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.118543 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") pod \"ae7433f6-40cb-4caf-8356-10bb93645af5\" (UID: \"ae7433f6-40cb-4caf-8356-10bb93645af5\") " Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.119440 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") pod \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.119512 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") pod \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\" (UID: \"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a\") " Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.119613 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" (UID: "4435f03e-0012-4f98-87b6-7f7dc2e0fd6a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.120117 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ae7433f6-40cb-4caf-8356-10bb93645af5-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.120167 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nhdq\" (UniqueName: \"kubernetes.io/projected/ae7433f6-40cb-4caf-8356-10bb93645af5-kube-api-access-4nhdq\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.120202 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.125605 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ae7433f6-40cb-4caf-8356-10bb93645af5" (UID: "ae7433f6-40cb-4caf-8356-10bb93645af5"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.138089 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" (UID: "4435f03e-0012-4f98-87b6-7f7dc2e0fd6a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.185490 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:48 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:48 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:48 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.185597 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.222851 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4435f03e-0012-4f98-87b6-7f7dc2e0fd6a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.222913 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ae7433f6-40cb-4caf-8356-10bb93645af5-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.873868 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"4435f03e-0012-4f98-87b6-7f7dc2e0fd6a","Type":"ContainerDied","Data":"d129e38a5b690123ccc9fe380f07527c9a27b1434082899a97ca0ad67cfe6489"} Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.873967 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d129e38a5b690123ccc9fe380f07527c9a27b1434082899a97ca0ad67cfe6489" Jan 28 11:24:48 crc kubenswrapper[4804]: 
I0128 11:24:48.873930 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.876872 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" event={"ID":"ae7433f6-40cb-4caf-8356-10bb93645af5","Type":"ContainerDied","Data":"c2ace65eb04ab5ff8b961ebdb9574c3959291d26b7237bb5bd982c03d8d46b22"} Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.876938 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c2ace65eb04ab5ff8b961ebdb9574c3959291d26b7237bb5bd982c03d8d46b22" Jan 28 11:24:48 crc kubenswrapper[4804]: I0128 11:24:48.876995 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr" Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.178715 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:49 crc kubenswrapper[4804]: [-]has-synced failed: reason withheld Jan 28 11:24:49 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:49 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.178772 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.781480 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-bgqd8"] Jan 28 11:24:49 crc kubenswrapper[4804]: W0128 
11:24:49.792256 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod03844e8b_8d66_4cd7_aa19_51caa1407918.slice/crio-080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d WatchSource:0}: Error finding container 080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d: Status 404 returned error can't find the container with id 080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.883635 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" event={"ID":"03844e8b-8d66-4cd7-aa19-51caa1407918","Type":"ContainerStarted","Data":"080922c3dda98e098fd8ff31dc3049d092f877c246095dc1f507e09adc60e50d"} Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.886598 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-samples-operator_cluster-samples-operator-665b6dd947-4wpb6_46da2b10-cba3-46fa-a2f3-972499966fd3/cluster-samples-operator/0.log" Jan 28 11:24:49 crc kubenswrapper[4804]: I0128 11:24:49.886641 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-4wpb6" event={"ID":"46da2b10-cba3-46fa-a2f3-972499966fd3","Type":"ContainerStarted","Data":"e82503a1c0d24e0741c8abe761ce9daccd9f772e2da6578ac2d39a02c7bf1f9f"} Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.178647 4804 patch_prober.go:28] interesting pod/router-default-5444994796-h44hn container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 28 11:24:50 crc kubenswrapper[4804]: [+]has-synced ok Jan 28 11:24:50 crc kubenswrapper[4804]: [+]process-running ok Jan 28 11:24:50 crc kubenswrapper[4804]: healthz check failed Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.179142 
4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-h44hn" podUID="cf33f13a-5328-47e6-8e14-1c0a84927117" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.388779 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"] Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.389121 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" containerID="cri-o://fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f" gracePeriod=30 Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.417142 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"] Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.417360 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager" containerID="cri-o://e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8" gracePeriod=30 Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.894380 4804 generic.go:334] "Generic (PLEG): container finished" podID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerID="e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8" exitCode=0 Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.894420 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" 
event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerDied","Data":"e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8"} Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.896444 4804 generic.go:334] "Generic (PLEG): container finished" podID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerID="fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f" exitCode=0 Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.896518 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerDied","Data":"fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f"} Jan 28 11:24:50 crc kubenswrapper[4804]: I0128 11:24:50.898695 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" event={"ID":"03844e8b-8d66-4cd7-aa19-51caa1407918","Type":"ContainerStarted","Data":"be5253521d5e6ad770d8cc9a163638f1cea5c7460a707c026d28b5f6c44e6418"} Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.182292 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.186230 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-h44hn" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.408551 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479227 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"] Jan 28 11:24:51 crc kubenswrapper[4804]: E0128 11:24:51.479580 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerName="pruner" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479597 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerName="pruner" Jan 28 11:24:51 crc kubenswrapper[4804]: E0128 11:24:51.479615 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerName="collect-profiles" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479623 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerName="collect-profiles" Jan 28 11:24:51 crc kubenswrapper[4804]: E0128 11:24:51.479634 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479642 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479744 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" containerName="controller-manager" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479759 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" containerName="collect-profiles" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.479771 4804 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4435f03e-0012-4f98-87b6-7f7dc2e0fd6a" containerName="pruner" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.480299 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485019 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485111 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485180 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.485313 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") pod \"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\" (UID: 
\"c802cb06-d5ee-489c-aa2d-4dee5f3f2557\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.486458 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca" (OuterVolumeSpecName: "client-ca") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.486471 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.486744 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config" (OuterVolumeSpecName: "config") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.493326 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.493407 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj" (OuterVolumeSpecName: "kube-api-access-dpcsj") pod "c802cb06-d5ee-489c-aa2d-4dee5f3f2557" (UID: "c802cb06-d5ee-489c-aa2d-4dee5f3f2557"). InnerVolumeSpecName "kube-api-access-dpcsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.503073 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"] Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587041 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587108 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " 
pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587176 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587252 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587394 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587412 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587426 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587442 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpcsj\" (UniqueName: \"kubernetes.io/projected/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-kube-api-access-dpcsj\") on node \"crc\" 
DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.587455 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c802cb06-d5ee-489c-aa2d-4dee5f3f2557-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689161 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689271 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.689293 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: 
I0128 11:24:51.689317 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.690893 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.691192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.695103 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.747968 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " 
pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.748494 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"controller-manager-5dc588d788-64sh7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.786814 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.825480 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.890920 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.891437 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.891470 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") " Jan 28 
11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.891509 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") pod \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\" (UID: \"d56b6530-c7d7-432d-bd5e-1a07a2d94515\") "
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.892289 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config" (OuterVolumeSpecName: "config") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.892568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca" (OuterVolumeSpecName: "client-ca") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.897622 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk" (OuterVolumeSpecName: "kube-api-access-74dlk") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "kube-api-access-74dlk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.902229 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d56b6530-c7d7-432d-bd5e-1a07a2d94515" (UID: "d56b6530-c7d7-432d-bd5e-1a07a2d94515"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.914417 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-bgqd8" event={"ID":"03844e8b-8d66-4cd7-aa19-51caa1407918","Type":"ContainerStarted","Data":"4e1665d2643f9e7843913de938e6efabf84334825d7e735b2ad99d81bececd70"}
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.919233 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f" event={"ID":"d56b6530-c7d7-432d-bd5e-1a07a2d94515","Type":"ContainerDied","Data":"5fd5c872cb044c160a4ff18aa2a4c6121bf64074f3146f4655562dbf6f1c2b4e"}
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.919306 4804 scope.go:117] "RemoveContainer" containerID="e29698ab75e7e02a47e44e9099d1296207853f543f444cd2dc63a10873278dc8"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.919470 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.924104 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56" event={"ID":"c802cb06-d5ee-489c-aa2d-4dee5f3f2557","Type":"ContainerDied","Data":"cffe7deccba04a98fba8c431ccb78fb720efb5536fc80dba3180146f85a85987"}
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.924352 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-z4j56"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.953515 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-bgqd8" podStartSLOduration=146.953480712 podStartE2EDuration="2m26.953480712s" podCreationTimestamp="2026-01-28 11:22:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:51.941488129 +0000 UTC m=+167.736368123" watchObservedRunningTime="2026-01-28 11:24:51.953480712 +0000 UTC m=+167.748360716"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.957497 4804 scope.go:117] "RemoveContainer" containerID="fe1f9a02caf21153db1bd567b5a8dd3b8d2a57a944d9bcf293a808b20081940f"
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.973968 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"]
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.977324 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-z4j56"]
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995715 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d56b6530-c7d7-432d-bd5e-1a07a2d94515-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995753 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-74dlk\" (UniqueName: \"kubernetes.io/projected/d56b6530-c7d7-432d-bd5e-1a07a2d94515-kube-api-access-74dlk\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995763 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-client-ca\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:51 crc kubenswrapper[4804]: I0128 11:24:51.995775 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d56b6530-c7d7-432d-bd5e-1a07a2d94515-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.008577 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"]
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.014335 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-wg94f"]
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.039932 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"]
Jan 28 11:24:52 crc kubenswrapper[4804]: W0128 11:24:52.049816 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod59930ea0_7a62_4dd0_a48d_0246b34a6be7.slice/crio-a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6 WatchSource:0}: Error finding container a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6: Status 404 returned error can't find the container with id a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.289714 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"]
Jan 28 11:24:52 crc kubenswrapper[4804]: E0128 11:24:52.290221 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.290307 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.290504 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" containerName="route-controller-manager"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.291050 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.296777 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.297285 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.297623 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.300025 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.300594 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.301381 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.305547 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"]
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403419 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403518 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.403595 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505011 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505315 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505425 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.505556 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.506837 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.508433 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.518224 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.542502 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"route-controller-manager-564dc4567b-ss5tk\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.611475 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.924814 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c802cb06-d5ee-489c-aa2d-4dee5f3f2557" path="/var/lib/kubelet/pods/c802cb06-d5ee-489c-aa2d-4dee5f3f2557/volumes"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.925562 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d56b6530-c7d7-432d-bd5e-1a07a2d94515" path="/var/lib/kubelet/pods/d56b6530-c7d7-432d-bd5e-1a07a2d94515/volumes"
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.936331 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerStarted","Data":"06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58"}
Jan 28 11:24:52 crc kubenswrapper[4804]: I0128 11:24:52.936446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerStarted","Data":"a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6"}
Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.110029 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"]
Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.960206 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerStarted","Data":"9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d"}
Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.960257 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerStarted","Data":"619586aca8589d38b78a3357b12c57e55e945004febd99d5969acb6d2850fa1c"}
Jan 28 11:24:53 crc kubenswrapper[4804]: I0128 11:24:53.981407 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" podStartSLOduration=3.981387577 podStartE2EDuration="3.981387577s" podCreationTimestamp="2026-01-28 11:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:53.980923572 +0000 UTC m=+169.775803556" watchObservedRunningTime="2026-01-28 11:24:53.981387577 +0000 UTC m=+169.776267561"
Jan 28 11:24:54 crc kubenswrapper[4804]: I0128 11:24:54.966644 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:54 crc kubenswrapper[4804]: I0128 11:24:54.973776 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"
Jan 28 11:24:54 crc kubenswrapper[4804]: I0128 11:24:54.992450 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" podStartSLOduration=4.992427555 podStartE2EDuration="4.992427555s" podCreationTimestamp="2026-01-28 11:24:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:24:54.989046674 +0000 UTC m=+170.783926658" watchObservedRunningTime="2026-01-28 11:24:54.992427555 +0000 UTC m=+170.787307539"
Jan 28 11:24:55 crc kubenswrapper[4804]: I0128 11:24:55.717732 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-src4s"
Jan 28 11:24:55 crc kubenswrapper[4804]: I0128 11:24:55.944649 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:24:55 crc kubenswrapper[4804]: I0128 11:24:55.949011 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xghdb"
Jan 28 11:25:01 crc kubenswrapper[4804]: I0128 11:25:01.826566 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7"
Jan 28 11:25:01 crc kubenswrapper[4804]: I0128 11:25:01.833094 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7"
Jan 28 11:25:05 crc kubenswrapper[4804]: I0128 11:25:05.701069 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-47d82"
Jan 28 11:25:08 crc kubenswrapper[4804]: E0128 11:25:08.860046 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 28 11:25:08 crc kubenswrapper[4804]: E0128 11:25:08.860634 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l6sc7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-48gg7_openshift-marketplace(23f32834-88e4-454d-81fe-6370a2bc8e0b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 11:25:08 crc kubenswrapper[4804]: E0128 11:25:08.862526 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-48gg7" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b"
Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.328207 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"]
Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.328840 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager" containerID="cri-o://06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58" gracePeriod=30
Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.435981 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"]
Jan 28 11:25:10 crc kubenswrapper[4804]: I0128 11:25:10.436193 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager" containerID="cri-o://9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d" gracePeriod=30
Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.069418 4804 generic.go:334] "Generic (PLEG): container finished" podID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerID="9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d" exitCode=0
Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.069554 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerDied","Data":"9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d"}
Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.073495 4804 generic.go:334] "Generic (PLEG): container finished" podID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerID="06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58" exitCode=0
Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.074041 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerDied","Data":"06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58"}
Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.827849 4804 patch_prober.go:28] interesting pod/controller-manager-5dc588d788-64sh7 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused" start-of-body=
Jan 28 11:25:11 crc kubenswrapper[4804]: I0128 11:25:11.827996 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.54:8443/healthz\": dial tcp 10.217.0.54:8443: connect: connection refused"
Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.172284 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.582661 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.582744 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.612394 4804 patch_prober.go:28] interesting pod/route-controller-manager-564dc4567b-ss5tk container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused" start-of-body=
Jan 28 11:25:12 crc kubenswrapper[4804]: I0128 11:25:12.612783 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.55:8443/healthz\": dial tcp 10.217.0.55:8443: connect: connection refused"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.750118 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-48gg7" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.860764 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.860929 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9n2zg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-nw6s2_openshift-marketplace(759bdf85-0cca-46db-8126-fab61a8664a8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.862054 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.863544 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.863662 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4gn5v,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-4842n_openshift-marketplace(ac859130-1b71-4993-ab3d-66600459a32a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.864795 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-4842n" podUID="ac859130-1b71-4993-ab3d-66600459a32a"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896107 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896268 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dklnt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jmw4q_openshift-marketplace(b641b655-0d3e-4838-8c87-fc72873f1944): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896316 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.896383 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wms6l,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-gw5tb_openshift-marketplace(8a0ef2f6-3113-478c-bb8c-9ea8e004a27d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.897417 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944"
Jan 28 11:25:13 crc kubenswrapper[4804]: E0128 11:25:13.897523 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-gw5tb" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.126601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerStarted","Data":"ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63"}
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.135997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerStarted","Data":"97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130"}
Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.139500 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerStarted","Data":"d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02"}
Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.141357 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944"
Jan 28 11:25:14 crc
kubenswrapper[4804]: E0128 11:25:14.141462 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-gw5tb" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.141680 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.142983 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-4842n" podUID="ac859130-1b71-4993-ab3d-66600459a32a" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.192561 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.193727 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.201398 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.201561 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.219202 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.263391 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359375 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359536 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359567 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359636 4804 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") pod \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\" (UID: \"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359697 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.359935 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: E0128 11:25:14.360120 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.360150 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.360300 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" containerName="route-controller-manager" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.360759 4804 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.361633 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config" (OuterVolumeSpecName: "config") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.361740 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca" (OuterVolumeSpecName: "client-ca") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.369280 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.373178 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv" (OuterVolumeSpecName: "kube-api-access-7f7mv") pod "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" (UID: "0c7badbc-1ec7-4d3b-b7da-c5ace0b243da"). InnerVolumeSpecName "kube-api-access-7f7mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.373430 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461171 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461849 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461971 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.461995 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462023 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462241 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462377 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462392 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462403 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 
11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.462415 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7f7mv\" (UniqueName: \"kubernetes.io/projected/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da-kube-api-access-7f7mv\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.479064 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.549517 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563192 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563359 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563459 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 
11:25:14.563501 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.563543 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.564779 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.565359 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.570706 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " 
pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.587408 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"route-controller-manager-c549dd98f-pm2jg\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.664621 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665101 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665166 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665261 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665288 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") pod \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\" (UID: \"59930ea0-7a62-4dd0-a48d-0246b34a6be7\") " Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca" (OuterVolumeSpecName: "client-ca") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665761 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.665838 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config" (OuterVolumeSpecName: "config") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.666426 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.666455 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.666467 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/59930ea0-7a62-4dd0-a48d-0246b34a6be7-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.672046 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd" (OuterVolumeSpecName: "kube-api-access-kfmsd") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "kube-api-access-kfmsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.672182 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "59930ea0-7a62-4dd0-a48d-0246b34a6be7" (UID: "59930ea0-7a62-4dd0-a48d-0246b34a6be7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.674487 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.746125 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.767973 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/59930ea0-7a62-4dd0-a48d-0246b34a6be7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.768357 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfmsd\" (UniqueName: \"kubernetes.io/projected/59930ea0-7a62-4dd0-a48d-0246b34a6be7-kube-api-access-kfmsd\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:14 crc kubenswrapper[4804]: I0128 11:25:14.879253 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:14 crc kubenswrapper[4804]: W0128 11:25:14.891043 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75052041_d7ef_4a05_ac6d_fdbf2f8e2ab9.slice/crio-9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e WatchSource:0}: Error finding container 9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e: Status 404 returned error can't find the container with id 9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.143182 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerStarted","Data":"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.143230 4804 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerStarted","Data":"9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.143403 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.150612 4804 generic.go:334] "Generic (PLEG): container finished" podID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerID="d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02" exitCode=0 Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.150678 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.168582 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" event={"ID":"0c7badbc-1ec7-4d3b-b7da-c5ace0b243da","Type":"ContainerDied","Data":"619586aca8589d38b78a3357b12c57e55e945004febd99d5969acb6d2850fa1c"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.168656 4804 scope.go:117] "RemoveContainer" containerID="9a6e2ae8d943be72371e7e8be26fa1c549f7f11bca18c47e56000f334ca6ec2d" Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.168825 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk" Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.178833 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerStarted","Data":"0772f280b6e0f7805b47c8bf2aacad07b11de9064aaab7c0ccf82cfc5b95b407"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.178906 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerStarted","Data":"5c0b0319c179b4a20958089ee62da80f519137ecc33dda002ad8302642312986"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.179233 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" podStartSLOduration=5.179184755 podStartE2EDuration="5.179184755s" podCreationTimestamp="2026-01-28 11:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:15.169302962 +0000 UTC m=+190.964182956" watchObservedRunningTime="2026-01-28 11:25:15.179184755 +0000 UTC m=+190.974064739" Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.185432 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" event={"ID":"59930ea0-7a62-4dd0-a48d-0246b34a6be7","Type":"ContainerDied","Data":"a60f1a1c976bce9f2b4c14b42cba1fdca7b0b73edf690555b2df9dd2467d95b6"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.185758 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-5dc588d788-64sh7" Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.187370 4804 generic.go:334] "Generic (PLEG): container finished" podID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerID="ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63" exitCode=0 Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.187479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.195112 4804 generic.go:334] "Generic (PLEG): container finished" podID="4ad471e3-4346-4464-94bf-778299801fe4" containerID="97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130" exitCode=0 Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.195163 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130"} Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.207374 4804 scope.go:117] "RemoveContainer" containerID="06586f4a68116bfb55e6101b637d70140d785a6280b61324bd015de3dcd7bb58" Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.233062 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"] Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.235959 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-564dc4567b-ss5tk"] Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.264384 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=1.264361907 podStartE2EDuration="1.264361907s" podCreationTimestamp="2026-01-28 11:25:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:15.242383087 +0000 UTC m=+191.037263071" watchObservedRunningTime="2026-01-28 11:25:15.264361907 +0000 UTC m=+191.059241891" Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.278109 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"] Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.280764 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-5dc588d788-64sh7"] Jan 28 11:25:15 crc kubenswrapper[4804]: I0128 11:25:15.566965 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.206253 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerStarted","Data":"42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1"} Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.208874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerStarted","Data":"c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381"} Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.214604 4804 generic.go:334] "Generic (PLEG): container finished" podID="915725ae-1097-4499-a143-bc1355edd31b" containerID="0772f280b6e0f7805b47c8bf2aacad07b11de9064aaab7c0ccf82cfc5b95b407" exitCode=0 Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 
11:25:16.214705 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerDied","Data":"0772f280b6e0f7805b47c8bf2aacad07b11de9064aaab7c0ccf82cfc5b95b407"} Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.229508 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hzmvb" podStartSLOduration=4.018623509 podStartE2EDuration="44.22948729s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.429120488 +0000 UTC m=+151.224000472" lastFinishedPulling="2026-01-28 11:25:15.639984279 +0000 UTC m=+191.434864253" observedRunningTime="2026-01-28 11:25:16.226633666 +0000 UTC m=+192.021513660" watchObservedRunningTime="2026-01-28 11:25:16.22948729 +0000 UTC m=+192.024367274" Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.261239 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-kvdtx" podStartSLOduration=4.033821828 podStartE2EDuration="44.26121622s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.428787307 +0000 UTC m=+151.223667291" lastFinishedPulling="2026-01-28 11:25:15.656181699 +0000 UTC m=+191.451061683" observedRunningTime="2026-01-28 11:25:16.258197961 +0000 UTC m=+192.053077945" watchObservedRunningTime="2026-01-28 11:25:16.26121622 +0000 UTC m=+192.056096204" Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.930272 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0c7badbc-1ec7-4d3b-b7da-c5ace0b243da" path="/var/lib/kubelet/pods/0c7badbc-1ec7-4d3b-b7da-c5ace0b243da/volumes" Jan 28 11:25:16 crc kubenswrapper[4804]: I0128 11:25:16.931415 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" 
path="/var/lib/kubelet/pods/59930ea0-7a62-4dd0-a48d-0246b34a6be7/volumes" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.226066 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerStarted","Data":"3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19"} Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.243817 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9b7c6" podStartSLOduration=3.579145715 podStartE2EDuration="43.243795514s" podCreationTimestamp="2026-01-28 11:24:34 +0000 UTC" firstStartedPulling="2026-01-28 11:24:36.468722011 +0000 UTC m=+152.263601985" lastFinishedPulling="2026-01-28 11:25:16.1333718 +0000 UTC m=+191.928251784" observedRunningTime="2026-01-28 11:25:17.241005633 +0000 UTC m=+193.035885617" watchObservedRunningTime="2026-01-28 11:25:17.243795514 +0000 UTC m=+193.038675498" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.309904 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:17 crc kubenswrapper[4804]: E0128 11:25:17.310562 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.310581 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.310714 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="59930ea0-7a62-4dd0-a48d-0246b34a6be7" containerName="controller-manager" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.312558 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.317137 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.318823 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319052 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319228 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319412 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.319662 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.321389 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.328144 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415263 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " 
pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415311 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415339 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415395 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.415410 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516347 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516391 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516419 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516475 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.516491 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.518303 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.518875 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.519734 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.524339 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.534425 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod \"controller-manager-74695d59-ptx55\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 
11:25:17.562711 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.630697 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.718370 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") pod \"915725ae-1097-4499-a143-bc1355edd31b\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.718471 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "915725ae-1097-4499-a143-bc1355edd31b" (UID: "915725ae-1097-4499-a143-bc1355edd31b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.718771 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") pod \"915725ae-1097-4499-a143-bc1355edd31b\" (UID: \"915725ae-1097-4499-a143-bc1355edd31b\") " Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.719161 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/915725ae-1097-4499-a143-bc1355edd31b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.771960 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "915725ae-1097-4499-a143-bc1355edd31b" (UID: "915725ae-1097-4499-a143-bc1355edd31b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:17 crc kubenswrapper[4804]: I0128 11:25:17.820756 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/915725ae-1097-4499-a143-bc1355edd31b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.117741 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.243616 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.243653 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"915725ae-1097-4499-a143-bc1355edd31b","Type":"ContainerDied","Data":"5c0b0319c179b4a20958089ee62da80f519137ecc33dda002ad8302642312986"} Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.243706 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c0b0319c179b4a20958089ee62da80f519137ecc33dda002ad8302642312986" Jan 28 11:25:18 crc kubenswrapper[4804]: I0128 11:25:18.245631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerStarted","Data":"9fd73d399deb0cba0c34dd2f8c6fce22c13b39693edc3b28ad57bd6a22ed7ddb"} Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.161966 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.263938 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerStarted","Data":"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635"} Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.264719 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.273247 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:19 crc kubenswrapper[4804]: I0128 11:25:19.339607 4804 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" podStartSLOduration=9.339591155 podStartE2EDuration="9.339591155s" podCreationTimestamp="2026-01-28 11:25:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:19.297050721 +0000 UTC m=+195.091930705" watchObservedRunningTime="2026-01-28 11:25:19.339591155 +0000 UTC m=+195.134471139" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.182865 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 11:25:21 crc kubenswrapper[4804]: E0128 11:25:21.183467 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="915725ae-1097-4499-a143-bc1355edd31b" containerName="pruner" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.183483 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="915725ae-1097-4499-a143-bc1355edd31b" containerName="pruner" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.183571 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="915725ae-1097-4499-a143-bc1355edd31b" containerName="pruner" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.183948 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.189328 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.189641 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.200464 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.266078 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.266131 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.266198 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.367910 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.367994 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.368019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.368059 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.368078 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.386546 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"installer-9-crc\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " 
pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.530427 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:25:21 crc kubenswrapper[4804]: I0128 11:25:21.912143 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 28 11:25:22 crc kubenswrapper[4804]: I0128 11:25:22.281494 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerStarted","Data":"1f7f9ceafdf7d00d9bfd7448074f1a52a2999efacee1059cdf48132d46ccbaba"} Jan 28 11:25:22 crc kubenswrapper[4804]: I0128 11:25:22.780958 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:25:22 crc kubenswrapper[4804]: I0128 11:25:22.781341 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.061185 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.206874 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.206954 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.248218 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.290409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerStarted","Data":"f527c2fa450cb1d21059874ecde9cc59de23295afb4043919e5157ab805c5185"} Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.318532 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=2.318501835 podStartE2EDuration="2.318501835s" podCreationTimestamp="2026-01-28 11:25:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:23.306720104 +0000 UTC m=+199.101600088" watchObservedRunningTime="2026-01-28 11:25:23.318501835 +0000 UTC m=+199.113381819" Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.328364 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:25:23 crc kubenswrapper[4804]: I0128 11:25:23.339601 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.361715 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"] Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.890342 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.890402 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:25:24 crc kubenswrapper[4804]: I0128 11:25:24.931199 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:25:25 crc kubenswrapper[4804]: I0128 11:25:25.300803 4804 kuberuntime_container.go:808] "Killing 
container with a grace period" pod="openshift-marketplace/community-operators-kvdtx" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" containerID="cri-o://c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381" gracePeriod=2 Jan 28 11:25:25 crc kubenswrapper[4804]: I0128 11:25:25.343058 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:25:26 crc kubenswrapper[4804]: I0128 11:25:26.307693 4804 generic.go:334] "Generic (PLEG): container finished" podID="4ad471e3-4346-4464-94bf-778299801fe4" containerID="c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381" exitCode=0 Jan 28 11:25:26 crc kubenswrapper[4804]: I0128 11:25:26.307834 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381"} Jan 28 11:25:26 crc kubenswrapper[4804]: I0128 11:25:26.967036 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.074330 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") pod \"4ad471e3-4346-4464-94bf-778299801fe4\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.074494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") pod \"4ad471e3-4346-4464-94bf-778299801fe4\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.074564 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") pod \"4ad471e3-4346-4464-94bf-778299801fe4\" (UID: \"4ad471e3-4346-4464-94bf-778299801fe4\") " Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.075375 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities" (OuterVolumeSpecName: "utilities") pod "4ad471e3-4346-4464-94bf-778299801fe4" (UID: "4ad471e3-4346-4464-94bf-778299801fe4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.083078 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7" (OuterVolumeSpecName: "kube-api-access-9wwt7") pod "4ad471e3-4346-4464-94bf-778299801fe4" (UID: "4ad471e3-4346-4464-94bf-778299801fe4"). InnerVolumeSpecName "kube-api-access-9wwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.135070 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4ad471e3-4346-4464-94bf-778299801fe4" (UID: "4ad471e3-4346-4464-94bf-778299801fe4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.176317 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wwt7\" (UniqueName: \"kubernetes.io/projected/4ad471e3-4346-4464-94bf-778299801fe4-kube-api-access-9wwt7\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.176350 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.176359 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4ad471e3-4346-4464-94bf-778299801fe4-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.317327 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-kvdtx" event={"ID":"4ad471e3-4346-4464-94bf-778299801fe4","Type":"ContainerDied","Data":"f2d704f75cce250d039d0dd04e24016c6014cdedb092df3fd7df1955f57ab50a"} Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.317371 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-kvdtx" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.317403 4804 scope.go:117] "RemoveContainer" containerID="c68ce6087a718ae99e3cc4463f7573abafe2ce84992961e303b7908a2d114381" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.339397 4804 scope.go:117] "RemoveContainer" containerID="97664e8d1984615a65d446a8bc46d2bb67c1945d32796a3d76d2a216b0e0b130" Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.356142 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"] Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.360309 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-kvdtx"] Jan 28 11:25:27 crc kubenswrapper[4804]: I0128 11:25:27.361027 4804 scope.go:117] "RemoveContainer" containerID="46ebddf77e338edea495290c557790d95f2de2df53a4e7134b3e39d453fa17af" Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.325186 4804 generic.go:334] "Generic (PLEG): container finished" podID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" exitCode=0 Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.325267 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129"} Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.329317 4804 generic.go:334] "Generic (PLEG): container finished" podID="759bdf85-0cca-46db-8126-fab61a8664a8" containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c" exitCode=0 Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.329371 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" 
event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c"} Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.336867 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerStarted","Data":"7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4"} Jan 28 11:25:28 crc kubenswrapper[4804]: I0128 11:25:28.920860 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ad471e3-4346-4464-94bf-778299801fe4" path="/var/lib/kubelet/pods/4ad471e3-4346-4464-94bf-778299801fe4/volumes" Jan 28 11:25:29 crc kubenswrapper[4804]: I0128 11:25:29.343854 4804 generic.go:334] "Generic (PLEG): container finished" podID="b641b655-0d3e-4838-8c87-fc72873f1944" containerID="7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4" exitCode=0 Jan 28 11:25:29 crc kubenswrapper[4804]: I0128 11:25:29.343920 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.301843 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.302529 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" containerID="cri-o://89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" gracePeriod=30 Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.318348 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.318557 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" containerID="cri-o://0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" gracePeriod=30 Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.352653 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerStarted","Data":"631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.355225 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerStarted","Data":"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.357597 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerStarted","Data":"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"} Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.392995 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jmw4q" podStartSLOduration=2.060841935 podStartE2EDuration="55.392974702s" podCreationTimestamp="2026-01-28 11:24:35 +0000 UTC" firstStartedPulling="2026-01-28 11:24:36.492452489 +0000 UTC m=+152.287332473" lastFinishedPulling="2026-01-28 11:25:29.824585256 +0000 UTC m=+205.619465240" observedRunningTime="2026-01-28 11:25:30.387734659 +0000 UTC 
m=+206.182614653" watchObservedRunningTime="2026-01-28 11:25:30.392974702 +0000 UTC m=+206.187854686" Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.424956 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-nw6s2" podStartSLOduration=3.031664682 podStartE2EDuration="55.424940223s" podCreationTimestamp="2026-01-28 11:24:35 +0000 UTC" firstStartedPulling="2026-01-28 11:24:37.646240515 +0000 UTC m=+153.441120499" lastFinishedPulling="2026-01-28 11:25:30.039516056 +0000 UTC m=+205.834396040" observedRunningTime="2026-01-28 11:25:30.423294718 +0000 UTC m=+206.218174702" watchObservedRunningTime="2026-01-28 11:25:30.424940223 +0000 UTC m=+206.219820207" Jan 28 11:25:30 crc kubenswrapper[4804]: I0128 11:25:30.970806 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.005158 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140780 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140834 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140863 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140937 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.140971 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141011 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141041 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") pod \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\" (UID: \"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141088 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.141123 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") pod \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\" (UID: \"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea\") " Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.142335 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.142784 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config" (OuterVolumeSpecName: "config") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.143053 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca" (OuterVolumeSpecName: "client-ca") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.143182 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config" (OuterVolumeSpecName: "config") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.143638 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca" (OuterVolumeSpecName: "client-ca") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.148216 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v" (OuterVolumeSpecName: "kube-api-access-jpg9v") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "kube-api-access-jpg9v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.164547 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.164801 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" (UID: "312f1c7a-2ac0-4e79-ba51-abf07c7f04ea"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.167244 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g" (OuterVolumeSpecName: "kube-api-access-8vb8g") pod "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" (UID: "75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9"). InnerVolumeSpecName "kube-api-access-8vb8g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243032 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vb8g\" (UniqueName: \"kubernetes.io/projected/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-kube-api-access-8vb8g\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243300 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243380 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243453 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243527 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jpg9v\" (UniqueName: \"kubernetes.io/projected/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-kube-api-access-jpg9v\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243586 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243809 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.243930 4804 reconciler_common.go:293] 
"Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.244053 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.366371 4804 generic.go:334] "Generic (PLEG): container finished" podID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc" exitCode=0 Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.366449 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.373125 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerStarted","Data":"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377476 4804 generic.go:334] "Generic (PLEG): container finished" podID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" exitCode=0 Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377591 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377776 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerDied","Data":"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377901 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-74695d59-ptx55" event={"ID":"312f1c7a-2ac0-4e79-ba51-abf07c7f04ea","Type":"ContainerDied","Data":"9fd73d399deb0cba0c34dd2f8c6fce22c13b39693edc3b28ad57bd6a22ed7ddb"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.377994 4804 scope.go:117] "RemoveContainer" containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.381989 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerStarted","Data":"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385244 4804 generic.go:334] "Generic (PLEG): container finished" podID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" exitCode=0 Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerDied","Data":"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385517 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" event={"ID":"75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9","Type":"ContainerDied","Data":"9aadef1b32585caba0ba9cf8ffb717fece86708962396590bf0f0d48c279556e"} Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.385587 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.406786 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gw5tb" podStartSLOduration=4.380302664 podStartE2EDuration="59.406769616s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.429966486 +0000 UTC m=+151.224846470" lastFinishedPulling="2026-01-28 11:25:30.456433438 +0000 UTC m=+206.251313422" observedRunningTime="2026-01-28 11:25:31.406391703 +0000 UTC m=+207.201271687" watchObservedRunningTime="2026-01-28 11:25:31.406769616 +0000 UTC m=+207.201649600" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.429840 4804 scope.go:117] "RemoveContainer" containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" Jan 28 11:25:31 crc kubenswrapper[4804]: E0128 11:25:31.430532 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635\": container with ID starting with 89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635 not found: ID does not exist" containerID="89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.430575 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635"} err="failed to get container 
status \"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635\": rpc error: code = NotFound desc = could not find container \"89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635\": container with ID starting with 89b4ba19a83340468daebd0e9bf92bf739e19314373c879173994b574c5e1635 not found: ID does not exist" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.430623 4804 scope.go:117] "RemoveContainer" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.441975 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.450599 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-74695d59-ptx55"] Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.455950 4804 scope.go:117] "RemoveContainer" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" Jan 28 11:25:31 crc kubenswrapper[4804]: E0128 11:25:31.456414 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b\": container with ID starting with 0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b not found: ID does not exist" containerID="0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.456454 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b"} err="failed to get container status \"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b\": rpc error: code = NotFound desc = could not find container \"0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b\": 
container with ID starting with 0b1e905d7f0d2ac717e6b56fae7cf2ce732e0eba806cfccfe7f7c1d6cd00c64b not found: ID does not exist" Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.460962 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:31 crc kubenswrapper[4804]: I0128 11:25:31.464587 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-c549dd98f-pm2jg"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317210 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317470 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-utilities" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317488 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-utilities" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317501 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317509 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317519 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317528 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317543 4804 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-content" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317551 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="extract-content" Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.317570 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317581 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317711 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ad471e3-4346-4464-94bf-778299801fe4" containerName="registry-server" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317730 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" containerName="controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.317744 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" containerName="route-controller-manager" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.318216 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.321778 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.321915 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.321968 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322197 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322244 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322367 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322459 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.322525 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: W0128 11:25:32.326393 4804 reflector.go:561] object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2": failed to list *v1.Secret: secrets "route-controller-manager-sa-dockercfg-h2zr2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.326448 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-h2zr2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"route-controller-manager-sa-dockercfg-h2zr2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.326634 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: W0128 11:25:32.326668 4804 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 28 11:25:32 crc kubenswrapper[4804]: E0128 11:25:32.326696 4804 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource 
\"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.326813 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.326966 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.328811 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.332075 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.334278 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.355180 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.391459 4804 generic.go:334] "Generic (PLEG): container finished" podID="ac859130-1b71-4993-ab3d-66600459a32a" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2" exitCode=0 Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.391507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"} Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460365 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460471 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460559 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460589 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"controller-manager-56fdbb7f67-z2wch\" 
(UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460613 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460630 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460659 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.460697 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.551086 4804 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.551149 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562340 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562385 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562411 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562441 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 
11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562512 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562538 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562566 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.562601 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod 
\"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.563583 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.563583 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.563817 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.564088 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.567341 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.567542 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.577714 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod \"controller-manager-56fdbb7f67-z2wch\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.579554 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.594649 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.640431 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.922338 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="312f1c7a-2ac0-4e79-ba51-abf07c7f04ea" path="/var/lib/kubelet/pods/312f1c7a-2ac0-4e79-ba51-abf07c7f04ea/volumes" Jan 28 11:25:32 crc kubenswrapper[4804]: I0128 11:25:32.923024 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9" path="/var/lib/kubelet/pods/75052041-d7ef-4a05-ac6d-fdbf2f8e2ab9/volumes" Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.066025 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.334014 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.398415 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerStarted","Data":"ea883bc2d51aafd97d5cd59b8bd8970b0e6abb434f296bbe9aa56e76957157f5"} Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.400356 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerStarted","Data":"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"} Jan 28 11:25:33 crc kubenswrapper[4804]: E0128 11:25:33.564286 4804 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition Jan 28 11:25:33 crc kubenswrapper[4804]: E0128 11:25:33.564394 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config podName:779944ca-d8be-40c0-89ac-1e1b3208eed2 nodeName:}" failed. No retries permitted until 2026-01-28 11:25:34.064372264 +0000 UTC m=+209.859252248 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config") pod "route-controller-manager-7cbb595b88-w8rrl" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2") : failed to sync configmap cache: timed out waiting for the condition Jan 28 11:25:33 crc kubenswrapper[4804]: I0128 11:25:33.637238 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.080967 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.082143 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"route-controller-manager-7cbb595b88-w8rrl\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.146602 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:34 crc kubenswrapper[4804]: I0128 11:25:34.425963 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-48gg7" podStartSLOduration=5.917432278 podStartE2EDuration="1m2.425948338s" podCreationTimestamp="2026-01-28 11:24:32 +0000 UTC" firstStartedPulling="2026-01-28 11:24:35.45541721 +0000 UTC m=+151.250297194" lastFinishedPulling="2026-01-28 11:25:31.96393327 +0000 UTC m=+207.758813254" observedRunningTime="2026-01-28 11:25:34.422295286 +0000 UTC m=+210.217175270" watchObservedRunningTime="2026-01-28 11:25:34.425948338 +0000 UTC m=+210.220828322" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.700899 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.700975 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.930120 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:35 crc kubenswrapper[4804]: I0128 11:25:35.930169 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-nw6s2" Jan 28 11:25:36 crc kubenswrapper[4804]: I0128 11:25:36.524503 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:36 crc kubenswrapper[4804]: W0128 11:25:36.536480 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod779944ca_d8be_40c0_89ac_1e1b3208eed2.slice/crio-293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1 
WatchSource:0}: Error finding container 293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1: Status 404 returned error can't find the container with id 293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1 Jan 28 11:25:36 crc kubenswrapper[4804]: I0128 11:25:36.739449 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" probeResult="failure" output=< Jan 28 11:25:36 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s Jan 28 11:25:36 crc kubenswrapper[4804]: > Jan 28 11:25:36 crc kubenswrapper[4804]: I0128 11:25:36.974108 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" probeResult="failure" output=< Jan 28 11:25:36 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s Jan 28 11:25:36 crc kubenswrapper[4804]: > Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.422253 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerStarted","Data":"9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186"} Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.422304 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerStarted","Data":"293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1"} Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.422522 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:37 
crc kubenswrapper[4804]: I0128 11:25:37.424464 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerStarted","Data":"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"} Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.425508 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerStarted","Data":"dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f"} Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.425905 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.427575 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.430468 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.444721 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" podStartSLOduration=7.444705655 podStartE2EDuration="7.444705655s" podCreationTimestamp="2026-01-28 11:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:37.444111486 +0000 UTC m=+213.238991480" watchObservedRunningTime="2026-01-28 11:25:37.444705655 +0000 UTC m=+213.239585639" Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.548442 4804 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" podStartSLOduration=7.5484255860000005 podStartE2EDuration="7.548425586s" podCreationTimestamp="2026-01-28 11:25:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:37.525699382 +0000 UTC m=+213.320579366" watchObservedRunningTime="2026-01-28 11:25:37.548425586 +0000 UTC m=+213.343305570" Jan 28 11:25:37 crc kubenswrapper[4804]: I0128 11:25:37.548631 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4842n" podStartSLOduration=4.000733532 podStartE2EDuration="1m3.548627063s" podCreationTimestamp="2026-01-28 11:24:34 +0000 UTC" firstStartedPulling="2026-01-28 11:24:36.530874619 +0000 UTC m=+152.325754603" lastFinishedPulling="2026-01-28 11:25:36.07876815 +0000 UTC m=+211.873648134" observedRunningTime="2026-01-28 11:25:37.545245911 +0000 UTC m=+213.340125895" watchObservedRunningTime="2026-01-28 11:25:37.548627063 +0000 UTC m=+213.343507047" Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.581813 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582060 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582100 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582734 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.582789 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5" gracePeriod=600 Jan 28 11:25:42 crc kubenswrapper[4804]: I0128 11:25:42.592810 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.056807 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.056852 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.104151 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.472284 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5" exitCode=0 Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.472432 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5"} Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.473121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e"} Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.520573 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-48gg7" Jan 28 11:25:43 crc kubenswrapper[4804]: I0128 11:25:43.825812 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"] Jan 28 11:25:44 crc kubenswrapper[4804]: I0128 11:25:44.204710 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" containerID="cri-o://79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d" gracePeriod=15 Jan 28 11:25:44 crc kubenswrapper[4804]: I0128 11:25:44.483767 4804 generic.go:334] "Generic (PLEG): container finished" podID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerID="79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d" exitCode=0 Jan 28 11:25:44 crc kubenswrapper[4804]: I0128 11:25:44.483855 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerDied","Data":"79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d"} Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.221042 
4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.258615 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"] Jan 28 11:25:45 crc kubenswrapper[4804]: E0128 11:25:45.258998 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.259023 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.259182 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" containerName="oauth-openshift" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.259833 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.276856 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"] Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323294 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323349 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323375 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323400 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323427 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323453 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323472 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323513 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323551 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323574 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323601 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323619 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.323636 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") pod \"5054f20f-444d-40e8-ad18-3515e1ff2638\" (UID: \"5054f20f-444d-40e8-ad18-3515e1ff2638\") " Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.324338 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir" (OuterVolumeSpecName: "audit-dir") pod 
"5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325377 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325475 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325470 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.325705 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.330625 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331117 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331228 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331449 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). 
InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331459 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p" (OuterVolumeSpecName: "kube-api-access-6vl2p") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "kube-api-access-6vl2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.331737 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.338435 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-system-session". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.339061 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "5054f20f-444d-40e8-ad18-3515e1ff2638" (UID: "5054f20f-444d-40e8-ad18-3515e1ff2638"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.370927 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.371186 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.409151 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4842n" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.424942 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mq2v\" (UniqueName: \"kubernetes.io/projected/219ecee2-929c-4499-b2d2-47264524ae3f-kube-api-access-9mq2v\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.424998 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-login\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " 
pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425033 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-router-certs\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425072 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-service-ca\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425101 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425180 
4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-session\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425208 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425250 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/219ecee2-929c-4499-b2d2-47264524ae3f-audit-dir\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425276 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425305 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425354 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-audit-policies\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425375 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-error\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425421 4804 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425437 4804 reconciler_common.go:293] "Volume detached for volume 
\"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425451 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425462 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425475 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425506 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425516 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425525 4804 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc 
kubenswrapper[4804]: I0128 11:25:45.425534 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425543 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425552 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425561 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6vl2p\" (UniqueName: \"kubernetes.io/projected/5054f20f-444d-40e8-ad18-3515e1ff2638-kube-api-access-6vl2p\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425569 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.425579 4804 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/5054f20f-444d-40e8-ad18-3515e1ff2638-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.490462 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" 
event={"ID":"5054f20f-444d-40e8-ad18-3515e1ff2638","Type":"ContainerDied","Data":"3604a1ed363f990559841eb45c533c06695cdf71dd2d767f1fae173b03ac7671"} Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.490755 4804 scope.go:117] "RemoveContainer" containerID="79c1b2853938dfb6f36ee8ac10844c8a54c903878160557f1d100d526d8ed15d" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.490645 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-48gg7" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" containerID="cri-o://67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c" gracePeriod=2 Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.491137 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-44lsd" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.523719 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.524062 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-44lsd"] Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526475 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9mq2v\" (UniqueName: \"kubernetes.io/projected/219ecee2-929c-4499-b2d2-47264524ae3f-kube-api-access-9mq2v\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526514 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-login\") pod 
\"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526551 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-router-certs\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526607 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-service-ca\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526664 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 
11:25:45.526701 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-session\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526723 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526765 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/219ecee2-929c-4499-b2d2-47264524ae3f-audit-dir\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526789 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-cliconfig\") pod 
\"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526845 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526868 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-audit-policies\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.526911 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-error\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.528453 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/219ecee2-929c-4499-b2d2-47264524ae3f-audit-dir\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530147 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-service-ca\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530217 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-cliconfig\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530776 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-audit-policies\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.530980 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " 
pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.531267 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-error\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.532540 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.533318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.534673 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-serving-cert\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.535746 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-session\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.536470 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-user-template-login\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.538103 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/219ecee2-929c-4499-b2d2-47264524ae3f-v4-0-config-system-router-certs\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.541703 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4842n"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.545898 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9mq2v\" (UniqueName: \"kubernetes.io/projected/219ecee2-929c-4499-b2d2-47264524ae3f-kube-api-access-9mq2v\") pod \"oauth-openshift-59b95f96cf-ncf7v\" (UID: \"219ecee2-929c-4499-b2d2-47264524ae3f\") " pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.586036 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.748222 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jmw4q"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.786814 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jmw4q"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.914040 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:25:45 crc kubenswrapper[4804]: I0128 11:25:45.965051 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-nw6s2"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.004933 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-nw6s2"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.035674 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") pod \"23f32834-88e4-454d-81fe-6370a2bc8e0b\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") "
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.035811 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") pod \"23f32834-88e4-454d-81fe-6370a2bc8e0b\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") "
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.035924 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") pod \"23f32834-88e4-454d-81fe-6370a2bc8e0b\" (UID: \"23f32834-88e4-454d-81fe-6370a2bc8e0b\") "
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.038906 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities" (OuterVolumeSpecName: "utilities") pod "23f32834-88e4-454d-81fe-6370a2bc8e0b" (UID: "23f32834-88e4-454d-81fe-6370a2bc8e0b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.046451 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7" (OuterVolumeSpecName: "kube-api-access-l6sc7") pod "23f32834-88e4-454d-81fe-6370a2bc8e0b" (UID: "23f32834-88e4-454d-81fe-6370a2bc8e0b"). InnerVolumeSpecName "kube-api-access-l6sc7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.050334 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"]
Jan 28 11:25:46 crc kubenswrapper[4804]: W0128 11:25:46.058343 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod219ecee2_929c_4499_b2d2_47264524ae3f.slice/crio-f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c WatchSource:0}: Error finding container f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c: Status 404 returned error can't find the container with id f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.089441 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "23f32834-88e4-454d-81fe-6370a2bc8e0b" (UID: "23f32834-88e4-454d-81fe-6370a2bc8e0b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.137805 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.137836 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/23f32834-88e4-454d-81fe-6370a2bc8e0b-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.137851 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6sc7\" (UniqueName: \"kubernetes.io/projected/23f32834-88e4-454d-81fe-6370a2bc8e0b-kube-api-access-l6sc7\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.508679 4804 generic.go:334] "Generic (PLEG): container finished" podID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c" exitCode=0
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.508774 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-48gg7"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.508773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"}
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.509651 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-48gg7" event={"ID":"23f32834-88e4-454d-81fe-6370a2bc8e0b","Type":"ContainerDied","Data":"306a58f4bdfd74cc31f69b2bdc88525986d7ff5e31a732a9de2866902df8686e"}
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.509700 4804 scope.go:117] "RemoveContainer" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.515352 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" event={"ID":"219ecee2-929c-4499-b2d2-47264524ae3f","Type":"ContainerStarted","Data":"17823d02e7f4945c97550eea1936a3f942b18a3ba5da12edf1c5d10a067f903a"}
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.515479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" event={"ID":"219ecee2-929c-4499-b2d2-47264524ae3f","Type":"ContainerStarted","Data":"f70ae130617f839c42825f641a1b8bb09974c627bed09aed92e2e190a7691f5c"}
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.536067 4804 scope.go:117] "RemoveContainer" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.558804 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v" podStartSLOduration=27.558765887 podStartE2EDuration="27.558765887s" podCreationTimestamp="2026-01-28 11:25:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:46.555780928 +0000 UTC m=+222.350661002" watchObservedRunningTime="2026-01-28 11:25:46.558765887 +0000 UTC m=+222.353645911"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.574837 4804 scope.go:117] "RemoveContainer" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.592775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"]
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.598696 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-48gg7"]
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.612474 4804 scope.go:117] "RemoveContainer" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"
Jan 28 11:25:46 crc kubenswrapper[4804]: E0128 11:25:46.613832 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c\": container with ID starting with 67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c not found: ID does not exist" containerID="67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.613923 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c"} err="failed to get container status \"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c\": rpc error: code = NotFound desc = could not find container \"67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c\": container with ID starting with 67e974bc81ca39f29ccb1d2cc1cc0a73d944f6624f2c02c5fcdbf1abd33e525c not found: ID does not exist"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.613960 4804 scope.go:117] "RemoveContainer" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"
Jan 28 11:25:46 crc kubenswrapper[4804]: E0128 11:25:46.615033 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc\": container with ID starting with 85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc not found: ID does not exist" containerID="85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.615150 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc"} err="failed to get container status \"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc\": rpc error: code = NotFound desc = could not find container \"85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc\": container with ID starting with 85f0230cd9fce220be05cadb65d503607dda9c4a241e0c4df1fa7643aa4071bc not found: ID does not exist"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.615181 4804 scope.go:117] "RemoveContainer" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98"
Jan 28 11:25:46 crc kubenswrapper[4804]: E0128 11:25:46.615771 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98\": container with ID starting with 3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98 not found: ID does not exist" containerID="3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.615855 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98"} err="failed to get container status \"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98\": rpc error: code = NotFound desc = could not find container \"3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98\": container with ID starting with 3ebc683a6a62cde177d2a384e6ed4541311004d2bb8f30a4d59923d6c4003f98 not found: ID does not exist"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.629405 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"]
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.924293 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" path="/var/lib/kubelet/pods/23f32834-88e4-454d-81fe-6370a2bc8e0b/volumes"
Jan 28 11:25:46 crc kubenswrapper[4804]: I0128 11:25:46.925716 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5054f20f-444d-40e8-ad18-3515e1ff2638" path="/var/lib/kubelet/pods/5054f20f-444d-40e8-ad18-3515e1ff2638/volumes"
Jan 28 11:25:47 crc kubenswrapper[4804]: I0128 11:25:47.524421 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:47 crc kubenswrapper[4804]: I0128 11:25:47.535649 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-59b95f96cf-ncf7v"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.028707 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"]
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.029097 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-nw6s2" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" containerID="cri-o://bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c" gracePeriod=2
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.485803 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530448 4804 generic.go:334] "Generic (PLEG): container finished" podID="759bdf85-0cca-46db-8126-fab61a8664a8" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c" exitCode=0
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530510 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-nw6s2"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530591 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"}
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-nw6s2" event={"ID":"759bdf85-0cca-46db-8126-fab61a8664a8","Type":"ContainerDied","Data":"8508960517ab52e83d2de6d52c76bf4bc148c42531ea9ecd0a9fb9ecc845cace"}
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.530653 4804 scope.go:117] "RemoveContainer" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.531201 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-4842n" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" containerID="cri-o://9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104" gracePeriod=2
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.549618 4804 scope.go:117] "RemoveContainer" containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.563577 4804 scope.go:117] "RemoveContainer" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.575118 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") pod \"759bdf85-0cca-46db-8126-fab61a8664a8\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") "
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.575178 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") pod \"759bdf85-0cca-46db-8126-fab61a8664a8\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") "
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.575217 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") pod \"759bdf85-0cca-46db-8126-fab61a8664a8\" (UID: \"759bdf85-0cca-46db-8126-fab61a8664a8\") "
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.576487 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities" (OuterVolumeSpecName: "utilities") pod "759bdf85-0cca-46db-8126-fab61a8664a8" (UID: "759bdf85-0cca-46db-8126-fab61a8664a8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.580730 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg" (OuterVolumeSpecName: "kube-api-access-9n2zg") pod "759bdf85-0cca-46db-8126-fab61a8664a8" (UID: "759bdf85-0cca-46db-8126-fab61a8664a8"). InnerVolumeSpecName "kube-api-access-9n2zg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.655274 4804 scope.go:117] "RemoveContainer" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"
Jan 28 11:25:48 crc kubenswrapper[4804]: E0128 11:25:48.655822 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c\": container with ID starting with bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c not found: ID does not exist" containerID="bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.655862 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c"} err="failed to get container status \"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c\": rpc error: code = NotFound desc = could not find container \"bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c\": container with ID starting with bb09bcf638c9cddc210828ccf98afcd74b25f4bf45a67e26f5ed9b72ff5fbc2c not found: ID does not exist"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.655916 4804 scope.go:117] "RemoveContainer" containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c"
Jan 28 11:25:48 crc kubenswrapper[4804]: E0128 11:25:48.656308 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c\": container with ID starting with 6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c not found: ID does not exist" containerID="6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.656344 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c"} err="failed to get container status \"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c\": rpc error: code = NotFound desc = could not find container \"6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c\": container with ID starting with 6ed7b2314a3a1de8215a831da8585b1d51d1cb76d9f737d5af3122126b35700c not found: ID does not exist"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.656371 4804 scope.go:117] "RemoveContainer" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc"
Jan 28 11:25:48 crc kubenswrapper[4804]: E0128 11:25:48.656764 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc\": container with ID starting with 7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc not found: ID does not exist" containerID="7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.656793 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc"} err="failed to get container status \"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc\": rpc error: code = NotFound desc = could not find container \"7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc\": container with ID starting with 7a655166cb98b396df033464bb0153ad8a2d69479f2ec38efe53356e143f44dc not found: ID does not exist"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.678769 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9n2zg\" (UniqueName: \"kubernetes.io/projected/759bdf85-0cca-46db-8126-fab61a8664a8-kube-api-access-9n2zg\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.678806 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.732309 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "759bdf85-0cca-46db-8126-fab61a8664a8" (UID: "759bdf85-0cca-46db-8126-fab61a8664a8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.780330 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/759bdf85-0cca-46db-8126-fab61a8664a8-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.859716 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"]
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.862348 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-nw6s2"]
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.921049 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" path="/var/lib/kubelet/pods/759bdf85-0cca-46db-8126-fab61a8664a8/volumes"
Jan 28 11:25:48 crc kubenswrapper[4804]: I0128 11:25:48.955388 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.083675 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") pod \"ac859130-1b71-4993-ab3d-66600459a32a\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") "
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.083745 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") pod \"ac859130-1b71-4993-ab3d-66600459a32a\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") "
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.083788 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") pod \"ac859130-1b71-4993-ab3d-66600459a32a\" (UID: \"ac859130-1b71-4993-ab3d-66600459a32a\") "
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.084647 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities" (OuterVolumeSpecName: "utilities") pod "ac859130-1b71-4993-ab3d-66600459a32a" (UID: "ac859130-1b71-4993-ab3d-66600459a32a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.087750 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v" (OuterVolumeSpecName: "kube-api-access-4gn5v") pod "ac859130-1b71-4993-ab3d-66600459a32a" (UID: "ac859130-1b71-4993-ab3d-66600459a32a"). InnerVolumeSpecName "kube-api-access-4gn5v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.102765 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ac859130-1b71-4993-ab3d-66600459a32a" (UID: "ac859130-1b71-4993-ab3d-66600459a32a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.185367 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.185402 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4gn5v\" (UniqueName: \"kubernetes.io/projected/ac859130-1b71-4993-ab3d-66600459a32a-kube-api-access-4gn5v\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.185424 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ac859130-1b71-4993-ab3d-66600459a32a-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538070 4804 generic.go:334] "Generic (PLEG): container finished" podID="ac859130-1b71-4993-ab3d-66600459a32a" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104" exitCode=0
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538124 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4842n"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538139 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"}
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538547 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4842n" event={"ID":"ac859130-1b71-4993-ab3d-66600459a32a","Type":"ContainerDied","Data":"035e3af2de47d3eb7c5ac704bd99c258ff7c2e427ae366507d1020ab4549c195"}
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.538570 4804 scope.go:117] "RemoveContainer" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.557107 4804 scope.go:117] "RemoveContainer" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.565068 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"]
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.567145 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-4842n"]
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.583159 4804 scope.go:117] "RemoveContainer" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.600559 4804 scope.go:117] "RemoveContainer" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"
Jan 28 11:25:49 crc kubenswrapper[4804]: E0128 11:25:49.601009 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104\": container with ID starting with 9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104 not found: ID does not exist" containerID="9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601076 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104"} err="failed to get container status \"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104\": rpc error: code = NotFound desc = could not find container \"9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104\": container with ID starting with 9b14924f75f64740e36758f0a4903ca324954994e5cc73945cbbd46a65d30104 not found: ID does not exist"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601111 4804 scope.go:117] "RemoveContainer" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"
Jan 28 11:25:49 crc kubenswrapper[4804]: E0128 11:25:49.601521 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2\": container with ID starting with f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2 not found: ID does not exist" containerID="f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601553 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2"} err="failed to get container status \"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2\": rpc error: code = NotFound desc = could not find container \"f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2\": container with ID starting with f19336b54ebd6d41771cb9ba702328fef19caa4bcc268b7e327701d24d1943f2 not found: ID does not exist"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601570 4804 scope.go:117] "RemoveContainer" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f"
Jan 28 11:25:49 crc kubenswrapper[4804]: E0128 11:25:49.601859 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f\": container with ID starting with f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f not found: ID does not exist" containerID="f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f"
Jan 28 11:25:49 crc kubenswrapper[4804]: I0128 11:25:49.601973 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f"} err="failed to get container status \"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f\": rpc error: code = NotFound desc = could not find container \"f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f\": container with ID starting with f56a23acdab2c28752aaf6e4dc9073f753adc48a6322ba76e58fc61f6bfbdc2f not found: ID does not exist"
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.305715 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"]
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.305959 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" containerID="cri-o://dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f" gracePeriod=30
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.411058 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"]
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.411373 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" containerID="cri-o://9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186" gracePeriod=30
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.547488 4804 generic.go:334] "Generic (PLEG): container finished" podID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerID="9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186" exitCode=0
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.547578 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerDied","Data":"9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186"}
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.549469 4804 generic.go:334] "Generic (PLEG): container finished" podID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerID="dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f" exitCode=0
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.549525 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerDied","Data":"dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f"}
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.930591 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac859130-1b71-4993-ab3d-66600459a32a" path="/var/lib/kubelet/pods/ac859130-1b71-4993-ab3d-66600459a32a/volumes"
Jan 28 11:25:50 crc kubenswrapper[4804]: I0128 11:25:50.942971 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"
Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.013117 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"
Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") "
Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106394 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") "
Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106427 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") "
Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106479 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") "
Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106504 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106548 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") pod \"779944ca-d8be-40c0-89ac-1e1b3208eed2\" (UID: \"779944ca-d8be-40c0-89ac-1e1b3208eed2\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.106707 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") pod \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\" (UID: \"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9\") " Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108181 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config" (OuterVolumeSpecName: "config") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca" (OuterVolumeSpecName: "client-ca") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108523 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.107741 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca" (OuterVolumeSpecName: "client-ca") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.108655 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config" (OuterVolumeSpecName: "config") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.111656 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.112195 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx" (OuterVolumeSpecName: "kube-api-access-jhhvx") pod "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" (UID: "2a87ee42-201c-4cf3-be06-cfa73ce8c3f9"). InnerVolumeSpecName "kube-api-access-jhhvx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.111689 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k" (OuterVolumeSpecName: "kube-api-access-nrp5k") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2"). InnerVolumeSpecName "kube-api-access-nrp5k". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.111791 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "779944ca-d8be-40c0-89ac-1e1b3208eed2" (UID: "779944ca-d8be-40c0-89ac-1e1b3208eed2"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.208538 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrp5k\" (UniqueName: \"kubernetes.io/projected/779944ca-d8be-40c0-89ac-1e1b3208eed2-kube-api-access-nrp5k\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.208839 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.208945 4804 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209024 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/779944ca-d8be-40c0-89ac-1e1b3208eed2-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209090 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209147 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/779944ca-d8be-40c0-89ac-1e1b3208eed2-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209220 4804 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209277 4804 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-jhhvx\" (UniqueName: \"kubernetes.io/projected/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-kube-api-access-jhhvx\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.209331 4804 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.558532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" event={"ID":"779944ca-d8be-40c0-89ac-1e1b3208eed2","Type":"ContainerDied","Data":"293582aed8c33e749713ca1eaf41ccd4f918856799864bbaa15b9045968b7da1"} Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.559197 4804 scope.go:117] "RemoveContainer" containerID="9fdf0cdcfdf78c20f986e02639f79a0a492e63a0752874cb68d5057b00ae4186" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.558582 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.560054 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" event={"ID":"2a87ee42-201c-4cf3-be06-cfa73ce8c3f9","Type":"ContainerDied","Data":"ea883bc2d51aafd97d5cd59b8bd8970b0e6abb434f296bbe9aa56e76957157f5"} Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.560120 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-56fdbb7f67-z2wch" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.581393 4804 scope.go:117] "RemoveContainer" containerID="dcf37b6c911b3e514bb3b3ee27eb41bb185440fd1374b3d272b1617c0225492f" Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.606941 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.617367 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-56fdbb7f67-z2wch"] Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.621942 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:51 crc kubenswrapper[4804]: I0128 11:25:51.625541 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7cbb595b88-w8rrl"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335447 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-57b5894978-jsfxt"] Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335812 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335836 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335850 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335858 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335874 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335897 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335906 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335913 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335924 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335932 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335966 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335977 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.335988 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.335995 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336008 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336015 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-content" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336026 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336033 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336048 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336055 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: E0128 11:25:52.336064 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336071 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="extract-utilities" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336227 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" containerName="controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336246 4804 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="23f32834-88e4-454d-81fe-6370a2bc8e0b" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336257 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="759bdf85-0cca-46db-8126-fab61a8664a8" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336268 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac859130-1b71-4993-ab3d-66600459a32a" containerName="registry-server" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336275 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" containerName="route-controller-manager" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.336668 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.337286 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.337770 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.340563 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.341675 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342212 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342224 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342293 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342302 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.342799 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343095 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343181 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343524 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 28 11:25:52 crc 
kubenswrapper[4804]: I0128 11:25:52.343724 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.343751 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.356467 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.362555 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b5894978-jsfxt"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.364943 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.425783 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-client-ca\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426125 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqbd7\" (UniqueName: \"kubernetes.io/projected/be636092-9be6-463c-ae35-758569ce2211-kube-api-access-sqbd7\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-config\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426638 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-client-ca\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426816 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be636092-9be6-463c-ae35-758569ce2211-serving-cert\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.426990 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-config\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.427145 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a543af3-067a-4432-8d29-3b98286e3b7f-serving-cert\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " 
pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.427304 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-proxy-ca-bundles\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.427519 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kz2pp\" (UniqueName: \"kubernetes.io/projected/6a543af3-067a-4432-8d29-3b98286e3b7f-kube-api-access-kz2pp\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528109 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be636092-9be6-463c-ae35-758569ce2211-serving-cert\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528348 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-config\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/6a543af3-067a-4432-8d29-3b98286e3b7f-serving-cert\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528601 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-proxy-ca-bundles\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.528741 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kz2pp\" (UniqueName: \"kubernetes.io/projected/6a543af3-067a-4432-8d29-3b98286e3b7f-kube-api-access-kz2pp\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529250 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqbd7\" (UniqueName: \"kubernetes.io/projected/be636092-9be6-463c-ae35-758569ce2211-kube-api-access-sqbd7\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529962 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-client-ca\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 
11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529993 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-config\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.530076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-client-ca\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529846 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-proxy-ca-bundles\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.529833 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-config\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.530577 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/be636092-9be6-463c-ae35-758569ce2211-client-ca\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " 
pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.532676 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-config\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.532817 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/6a543af3-067a-4432-8d29-3b98286e3b7f-client-ca\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.537988 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a543af3-067a-4432-8d29-3b98286e3b7f-serving-cert\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.538214 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be636092-9be6-463c-ae35-758569ce2211-serving-cert\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.544024 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kz2pp\" (UniqueName: 
\"kubernetes.io/projected/6a543af3-067a-4432-8d29-3b98286e3b7f-kube-api-access-kz2pp\") pod \"route-controller-manager-59d5fbf555-tz7m9\" (UID: \"6a543af3-067a-4432-8d29-3b98286e3b7f\") " pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.544749 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqbd7\" (UniqueName: \"kubernetes.io/projected/be636092-9be6-463c-ae35-758569ce2211-kube-api-access-sqbd7\") pod \"controller-manager-57b5894978-jsfxt\" (UID: \"be636092-9be6-463c-ae35-758569ce2211\") " pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.682349 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.699991 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.906032 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9"] Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.927116 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a87ee42-201c-4cf3-be06-cfa73ce8c3f9" path="/var/lib/kubelet/pods/2a87ee42-201c-4cf3-be06-cfa73ce8c3f9/volumes" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.929905 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="779944ca-d8be-40c0-89ac-1e1b3208eed2" path="/var/lib/kubelet/pods/779944ca-d8be-40c0-89ac-1e1b3208eed2/volumes" Jan 28 11:25:52 crc kubenswrapper[4804]: I0128 11:25:52.958404 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-57b5894978-jsfxt"] Jan 28 11:25:52 crc kubenswrapper[4804]: W0128 11:25:52.978289 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe636092_9be6_463c_ae35_758569ce2211.slice/crio-05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e WatchSource:0}: Error finding container 05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e: Status 404 returned error can't find the container with id 05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.579293 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" event={"ID":"be636092-9be6-463c-ae35-758569ce2211","Type":"ContainerStarted","Data":"629ea96b15a2c2dabf3f8d6b99390dc1148c11deb1b59f66ded8d6d41e0aa9f5"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.581042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" event={"ID":"be636092-9be6-463c-ae35-758569ce2211","Type":"ContainerStarted","Data":"05b007a8caebb64998575b0027e380be53d8882d86f6e6d42648be9d2a6cfb3e"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.581139 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.582774 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" event={"ID":"6a543af3-067a-4432-8d29-3b98286e3b7f","Type":"ContainerStarted","Data":"3c687d2da989738286afbb0597757621a25e79ba8c4a925728be40a3100df54d"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.582801 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" event={"ID":"6a543af3-067a-4432-8d29-3b98286e3b7f","Type":"ContainerStarted","Data":"1c118399712884cdd129b26e4112bfc42c2aea446d0d9a01af8e8deaec8869c5"} Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.583058 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.585701 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.594453 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.607019 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-57b5894978-jsfxt" 
podStartSLOduration=3.606987826 podStartE2EDuration="3.606987826s" podCreationTimestamp="2026-01-28 11:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:53.599161597 +0000 UTC m=+229.394041591" watchObservedRunningTime="2026-01-28 11:25:53.606987826 +0000 UTC m=+229.401867810" Jan 28 11:25:53 crc kubenswrapper[4804]: I0128 11:25:53.626539 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-59d5fbf555-tz7m9" podStartSLOduration=3.626516984 podStartE2EDuration="3.626516984s" podCreationTimestamp="2026-01-28 11:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:25:53.616860774 +0000 UTC m=+229.411740758" watchObservedRunningTime="2026-01-28 11:25:53.626516984 +0000 UTC m=+229.421396968" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.170638 4804 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.173751 4804 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.173789 4804 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.173914 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174274 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174361 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174378 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174390 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174397 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174409 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174417 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174434 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174442 4804 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174454 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174463 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174476 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174484 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174590 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174608 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174623 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174641 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174650 4804 
memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174660 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174692 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174750 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: E0128 11:26:00.174781 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174792 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174800 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" gracePeriod=15 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.174928 
4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.176852 4804 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334409 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334785 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334815 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334876 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334920 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334941 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.334980 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.335012 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.435872 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.435945 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.435970 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436008 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436022 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436038 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436078 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436116 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436149 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436164 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") 
" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436204 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436216 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436227 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436243 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.436264 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.633349 
4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.635662 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636564 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" exitCode=0 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636589 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" exitCode=0 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636597 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" exitCode=0 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636605 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" exitCode=2 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.636675 4804 scope.go:117] "RemoveContainer" containerID="dedd1b6acd82cf3ae97be7ca1d6c082960a4e01f36f4eecb6b74c672efd0118b" Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.638817 4804 generic.go:334] "Generic (PLEG): container finished" podID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerID="f527c2fa450cb1d21059874ecde9cc59de23295afb4043919e5157ab805c5185" exitCode=0 Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.638848 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerDied","Data":"f527c2fa450cb1d21059874ecde9cc59de23295afb4043919e5157ab805c5185"} Jan 28 11:26:00 crc kubenswrapper[4804]: I0128 11:26:00.639626 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:01 crc kubenswrapper[4804]: I0128 11:26:01.648911 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:01 crc kubenswrapper[4804]: I0128 11:26:01.997210 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:26:01 crc kubenswrapper[4804]: I0128 11:26:01.997924 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156124 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") pod \"b357b6a6-77f2-483a-8689-9ec35a8d3008\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") pod \"b357b6a6-77f2-483a-8689-9ec35a8d3008\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156260 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") pod \"b357b6a6-77f2-483a-8689-9ec35a8d3008\" (UID: \"b357b6a6-77f2-483a-8689-9ec35a8d3008\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156472 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock" (OuterVolumeSpecName: "var-lock") pod "b357b6a6-77f2-483a-8689-9ec35a8d3008" (UID: "b357b6a6-77f2-483a-8689-9ec35a8d3008"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.156500 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "b357b6a6-77f2-483a-8689-9ec35a8d3008" (UID: "b357b6a6-77f2-483a-8689-9ec35a8d3008"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.161837 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "b357b6a6-77f2-483a-8689-9ec35a8d3008" (UID: "b357b6a6-77f2-483a-8689-9ec35a8d3008"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.257606 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b357b6a6-77f2-483a-8689-9ec35a8d3008-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.257918 4804 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.257927 4804 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b357b6a6-77f2-483a-8689-9ec35a8d3008-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.532274 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.533151 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.533588 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.533974 4804 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.657152 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.657937 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" exitCode=0 Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.658007 4804 scope.go:117] "RemoveContainer" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.658064 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.659647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"b357b6a6-77f2-483a-8689-9ec35a8d3008","Type":"ContainerDied","Data":"1f7f9ceafdf7d00d9bfd7448074f1a52a2999efacee1059cdf48132d46ccbaba"} Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.659674 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f7f9ceafdf7d00d9bfd7448074f1a52a2999efacee1059cdf48132d46ccbaba" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.659697 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662701 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662779 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662863 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.662871 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663010 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663087 4804 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663104 4804 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.663114 4804 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.672705 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.673105 4804 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.674655 4804 scope.go:117] "RemoveContainer" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.686825 4804 scope.go:117] "RemoveContainer" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 
11:26:02.701222 4804 scope.go:117] "RemoveContainer" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.714490 4804 scope.go:117] "RemoveContainer" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.729335 4804 scope.go:117] "RemoveContainer" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.747786 4804 scope.go:117] "RemoveContainer" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.748354 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\": container with ID starting with a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e not found: ID does not exist" containerID="a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748423 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e"} err="failed to get container status \"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\": rpc error: code = NotFound desc = could not find container \"a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e\": container with ID starting with a4e7fa146ad33ef72503e0e728028b1b1e116d97538025cf7a8d665706e8050e not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748486 4804 scope.go:117] "RemoveContainer" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.748903 4804 log.go:32] 
"ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\": container with ID starting with b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051 not found: ID does not exist" containerID="b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748960 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051"} err="failed to get container status \"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\": rpc error: code = NotFound desc = could not find container \"b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051\": container with ID starting with b2e45a18452fdc3554e0491e056896be7f73f1d566ce08b736521866a7e7c051 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.748991 4804 scope.go:117] "RemoveContainer" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.749367 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\": container with ID starting with cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029 not found: ID does not exist" containerID="cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749417 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029"} err="failed to get container status \"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\": rpc error: code = NotFound desc = could 
not find container \"cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029\": container with ID starting with cfcfa7db3de27986e238050c52abd1d281868900442a9d88b17d97f1dfa18029 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749461 4804 scope.go:117] "RemoveContainer" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.749748 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\": container with ID starting with 82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81 not found: ID does not exist" containerID="82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749775 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81"} err="failed to get container status \"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\": rpc error: code = NotFound desc = could not find container \"82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81\": container with ID starting with 82b0256cc807f474c5a7aef0338fc637c11657c3fd506ab6c471da51d5fa4b81 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.749792 4804 scope.go:117] "RemoveContainer" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.750194 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\": container with ID starting with a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e not found: 
ID does not exist" containerID="a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.750218 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e"} err="failed to get container status \"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\": rpc error: code = NotFound desc = could not find container \"a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e\": container with ID starting with a21266e6dfab5eec4f3a2cef890a7beea412b55b4f857799b4d571c6d3991e5e not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.750232 4804 scope.go:117] "RemoveContainer" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" Jan 28 11:26:02 crc kubenswrapper[4804]: E0128 11:26:02.750535 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\": container with ID starting with add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1 not found: ID does not exist" containerID="add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.750559 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1"} err="failed to get container status \"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\": rpc error: code = NotFound desc = could not find container \"add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1\": container with ID starting with add2e0ec7dc1e98ffa75ceec37baeec600b4e19adea722474c2115bb6fae25b1 not found: ID does not exist" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.920677 4804 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.962424 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:02 crc kubenswrapper[4804]: I0128 11:26:02.963153 4804 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:04 crc kubenswrapper[4804]: I0128 11:26:04.918559 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:04 crc kubenswrapper[4804]: I0128 11:26:04.919184 4804 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.204451 4804 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.27:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.205016 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:05 crc kubenswrapper[4804]: W0128 11:26:05.224180 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c WatchSource:0}: Error finding container be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c: Status 404 returned error can't find the container with id be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.230977 4804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ee16dc8410f5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,LastTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.462067 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.462992 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.463409 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.463758 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.464075 4804 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.464163 4804 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.464484 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="200ms" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.665392 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="400ms" Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.676198 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b"} Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.676250 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"be051426154fe61dbd5ae81bb9fe36b129599de48393184ae0a3a18c2effe04c"} Jan 28 11:26:05 crc kubenswrapper[4804]: I0128 11:26:05.677108 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:05 crc kubenswrapper[4804]: E0128 11:26:05.677223 4804 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:06 crc kubenswrapper[4804]: E0128 11:26:06.066405 4804 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="800ms" Jan 28 11:26:07 crc kubenswrapper[4804]: E0128 11:26:07.095200 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="1.6s" Jan 28 11:26:08 crc kubenswrapper[4804]: E0128 11:26:08.696094 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="3.2s" Jan 28 11:26:11 crc kubenswrapper[4804]: E0128 11:26:11.896955 4804 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.27:6443: connect: connection refused" interval="6.4s" Jan 28 11:26:12 crc kubenswrapper[4804]: E0128 11:26:12.759366 4804 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.27:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188ee16dc8410f5c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,LastTimestamp:2026-01-28 11:26:05.230575452 +0000 UTC m=+241.025455436,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.915040 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.916952 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.934614 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.934709 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:12 crc kubenswrapper[4804]: E0128 11:26:12.935438 4804 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:12 crc kubenswrapper[4804]: I0128 11:26:12.936228 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.719943 4804 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="4164ba9c34e39731302feb9e8d26eec3c5c006ee15174972812b45fc1503d60c" exitCode=0 Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720040 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"4164ba9c34e39731302feb9e8d26eec3c5c006ee15174972812b45fc1503d60c"} Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"2f01abdff9752f0be3a033a99749a23ee1d341b7eec6c97b1b4bfc4632ccfd61"} Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720481 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720496 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d" Jan 28 11:26:13 crc kubenswrapper[4804]: I0128 11:26:13.720928 4804 status_manager.go:851] "Failed to get status for pod" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.27:6443: connect: connection refused" Jan 28 11:26:13 crc kubenswrapper[4804]: E0128 11:26:13.720930 4804 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.27:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.741592 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.741666 4804 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18" exitCode=1 Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.741787 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18"} Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.742382 4804 scope.go:117] "RemoveContainer" containerID="8949bba59c5b469ac6d5239ecb150492ce1c81df43c311032f4b424bb94c7c18" Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"9dcacc22419228f8a6f17e6067b29106abbfd910a105c01de60cb2fb3418d6f4"} Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750586 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"ef9c5c4c7ab1908f624ea8804a418a1b5ea85b984a502be7f347e7dc5d0b3a76"} Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750598 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7e9f564a7bae30192accf7302dd9ea1753b812d4118c2d90b0187db04cc2adbd"}
Jan 28 11:26:14 crc kubenswrapper[4804]: I0128 11:26:14.750608 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"baaddbcae154f2448017c041d454a4011e6bb1a309c4f9f8577c627358883f20"}
Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.781110 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.781601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b97b33b8da7f4ebb0737883285456cbd33eaf784f8224902c085a19924d66810"}
Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.786841 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"e2d04acc5aa14fd8cd0658da7e77cfd4aba02dc55df8377e4c641dd8a429330e"}
Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.787075 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.787257 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d"
Jan 28 11:26:15 crc kubenswrapper[4804]: I0128 11:26:15.787300 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d"
Jan 28 11:26:17 crc kubenswrapper[4804]: I0128 11:26:17.937022 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:26:17 crc kubenswrapper[4804]: I0128 11:26:17.937300 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:26:17 crc kubenswrapper[4804]: I0128 11:26:17.942406 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.796039 4804 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.822731 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d"
Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.822789 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d"
Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.826970 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:26:20 crc kubenswrapper[4804]: I0128 11:26:20.829783 4804 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c8734f58-f5ce-4a42-8c7f-0620c5bede02"
Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.322462 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.326532 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.827940 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.828114 4804 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d"
Jan 28 11:26:21 crc kubenswrapper[4804]: I0128 11:26:21.828140 4804 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3ed363e0-3913-4e5f-93a4-be30983b2c7d"
Jan 28 11:26:24 crc kubenswrapper[4804]: I0128 11:26:24.144533 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 28 11:26:24 crc kubenswrapper[4804]: I0128 11:26:24.926298 4804 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="c8734f58-f5ce-4a42-8c7f-0620c5bede02"
Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.180745 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx"
Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.406340 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.480035 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 28 11:26:30 crc kubenswrapper[4804]: I0128 11:26:30.906418 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Jan 28 11:26:31 crc kubenswrapper[4804]: I0128 11:26:31.150834 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Jan 28 11:26:31 crc kubenswrapper[4804]: I0128 11:26:31.204333 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Jan 28 11:26:31 crc kubenswrapper[4804]: I0128 11:26:31.961643 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 28 11:26:32 crc kubenswrapper[4804]: I0128 11:26:32.163333 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 28 11:26:32 crc kubenswrapper[4804]: I0128 11:26:32.208480 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 28 11:26:32 crc kubenswrapper[4804]: I0128 11:26:32.494505 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.042341 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.114270 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.383084 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.432965 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.496043 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.523794 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.524426 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.580250 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.623680 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.645058 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.743247 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.953155 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Jan 28 11:26:33 crc kubenswrapper[4804]: I0128 11:26:33.992785 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.058235 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.066063 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.207413 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.230143 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.406797 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.407517 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.423267 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.756202 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.808304 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.827316 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.836244 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Jan 28 11:26:34 crc kubenswrapper[4804]: I0128 11:26:34.922520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.044461 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.286619 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.293343 4804 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.299035 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.299097 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"]
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.303386 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.317202 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=15.317188427 podStartE2EDuration="15.317188427s" podCreationTimestamp="2026-01-28 11:26:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:26:35.315185297 +0000 UTC m=+271.110065301" watchObservedRunningTime="2026-01-28 11:26:35.317188427 +0000 UTC m=+271.112068411"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.333467 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.344027 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.388256 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.399912 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.400639 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.406812 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.422744 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.434919 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.449966 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.471590 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.664131 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.754977 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.798782 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.800041 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.918668 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 28 11:26:35 crc kubenswrapper[4804]: I0128 11:26:35.938296 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.063695 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.148563 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.150187 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.190527 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.194403 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.227957 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.245725 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.306378 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.325869 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.359105 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.437167 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.452068 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.658992 4804 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.762678 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.810400 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.835059 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.857900 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 28 11:26:36 crc kubenswrapper[4804]: I0128 11:26:36.927017 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.082922 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.102928 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.104289 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.136570 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.161650 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.250814 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.251626 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.489470 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.512638 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.516499 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.529537 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.557590 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.568135 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.579447 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.636829 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.747421 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.807026 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.812188 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.817024 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.856207 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.864641 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.885325 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.901802 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.919256 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Jan 28 11:26:37 crc kubenswrapper[4804]: I0128 11:26:37.939437 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.002838 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.013092 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.020430 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.078914 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.248772 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.311062 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.391266 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.464945 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.465482 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.478026 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.501049 4804 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.656181 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.658223 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.672803 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.732297 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.751700 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.801231 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.853120 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.875698 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.900956 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Jan 28 11:26:38 crc kubenswrapper[4804]: I0128 11:26:38.974776 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.013548 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.063570 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.101925 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.178929 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.236827 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.258294 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.303956 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.388190 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.401366 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.412309 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.437507 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.530443 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.568738 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.612050 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.741657 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.759294 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.964741 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Jan 28 11:26:39 crc kubenswrapper[4804]: I0128 11:26:39.984545 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.003782 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.059624 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.283563 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.313147 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.446392 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.464027 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.568011 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.576792 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.608409 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.632549 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.633555 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.640762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.640797 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.690216 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.782450 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.782463 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.841379 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.851608 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.905146 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.911520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.921941 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.954616 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.972359 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Jan 28 11:26:40 crc kubenswrapper[4804]: I0128 11:26:40.993573 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.003678 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.107618 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.160707 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.181754 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.271400 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.406620 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.439078 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.440496 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.444377 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.450686 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.482052 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.493118 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.647781 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.713334 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.953330 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.953868 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Jan 28 11:26:41 crc kubenswrapper[4804]: I0128 11:26:41.989955 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:41.992770 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.062514 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.081225 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.082436 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.114429 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.145482 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.174089 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.232300 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.378231 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.510641 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.582931 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw"
Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.604687 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 28 11:26:42 crc
kubenswrapper[4804]: I0128 11:26:42.619165 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.771156 4804 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.865730 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.899934 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.910742 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.948694 4804 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.949041 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" gracePeriod=5 Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.987677 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 28 11:26:42 crc kubenswrapper[4804]: I0128 11:26:42.992551 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.152638 4804 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.206148 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.239500 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.250679 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.262620 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.331750 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.334099 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.488134 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.501494 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.518015 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.559872 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 
11:26:43.641165 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.641545 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.762347 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.837330 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.838240 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.839624 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.972746 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.974009 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 28 11:26:43 crc kubenswrapper[4804]: I0128 11:26:43.981309 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.013270 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.035689 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 
28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.100981 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.149463 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.151229 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.262424 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.320288 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.381022 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.398088 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.426034 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.465258 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.652533 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.667753 4804 reflector.go:368] Caches populated for *v1.CSIDriver from 
k8s.io/client-go/informers/factory.go:160 Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.703565 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.836279 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.844031 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.915453 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.928912 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 28 11:26:44 crc kubenswrapper[4804]: I0128 11:26:44.939619 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.079400 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.079671 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.402111 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 28 11:26:45 crc kubenswrapper[4804]: I0128 11:26:45.609596 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 28 11:26:45 crc 
kubenswrapper[4804]: I0128 11:26:45.626279 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.090864 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.184337 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.397389 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.575505 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 28 11:26:46 crc kubenswrapper[4804]: I0128 11:26:46.819219 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.116005 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.118191 4804 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.461442 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.488338 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.489185 4804 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/certified-operators-gw5tb" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" containerID="cri-o://fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.497737 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.498042 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hzmvb" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" containerID="cri-o://42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.505917 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.506124 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" containerID="cri-o://4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.517062 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.517280 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9b7c6" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" containerID="cri-o://3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.530178 4804 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.530422 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jmw4q" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" containerID="cri-o://631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781" gracePeriod=30 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.565544 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s76k6"] Jan 28 11:26:47 crc kubenswrapper[4804]: E0128 11:26:47.566175 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerName="installer" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566201 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerName="installer" Jan 28 11:26:47 crc kubenswrapper[4804]: E0128 11:26:47.566229 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566238 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566529 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b357b6a6-77f2-483a-8689-9ec35a8d3008" containerName="installer" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.566565 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.567294 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.593642 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s76k6"] Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.636284 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.636376 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sbctf\" (UniqueName: \"kubernetes.io/projected/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-kube-api-access-sbctf\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.636446 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: E0128 11:26:47.733952 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb641b655_0d3e_4838_8c87_fc72873f1944.slice/crio-conmon-631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.739434 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.739494 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.739555 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sbctf\" (UniqueName: \"kubernetes.io/projected/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-kube-api-access-sbctf\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.741055 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc 
kubenswrapper[4804]: I0128 11:26:47.748693 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.761497 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sbctf\" (UniqueName: \"kubernetes.io/projected/349fc9e3-a236-44fd-b7b9-ee08f25c58fd-kube-api-access-sbctf\") pod \"marketplace-operator-79b997595-s76k6\" (UID: \"349fc9e3-a236-44fd-b7b9-ee08f25c58fd\") " pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.830721 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.933481 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.974066 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992631 4804 generic.go:334] "Generic (PLEG): container finished" podID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" exitCode=0 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992689 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5"} Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992715 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gw5tb" event={"ID":"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d","Type":"ContainerDied","Data":"60c5c3bae740bf47c18e8908e6f28f0a1a7fe1ff6bab40703594d2789651297c"} Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992732 4804 scope.go:117] "RemoveContainer" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.992836 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gw5tb" Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.995847 4804 generic.go:334] "Generic (PLEG): container finished" podID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerID="42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1" exitCode=0 Jan 28 11:26:47 crc kubenswrapper[4804]: I0128 11:26:47.995904 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.006630 4804 generic.go:334] "Generic (PLEG): container finished" podID="bb959019-0f9d-4210-8410-6b3c00b02337" containerID="4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58" exitCode=0 Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.006707 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerDied","Data":"4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.012278 4804 generic.go:334] "Generic (PLEG): container finished" podID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerID="3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19" exitCode=0 Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.012358 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.016529 4804 generic.go:334] "Generic (PLEG): container finished" podID="b641b655-0d3e-4838-8c87-fc72873f1944" 
containerID="631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781" exitCode=0 Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.016555 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781"} Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.044154 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") pod \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.044228 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") pod \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.044287 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") pod \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\" (UID: \"8a0ef2f6-3113-478c-bb8c-9ea8e004a27d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.045351 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities" (OuterVolumeSpecName: "utilities") pod "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" (UID: "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.050680 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l" (OuterVolumeSpecName: "kube-api-access-wms6l") pod "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" (UID: "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"). InnerVolumeSpecName "kube-api-access-wms6l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.085479 4804 scope.go:117] "RemoveContainer" containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.099382 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.111101 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.113553 4804 scope.go:117] "RemoveContainer" containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.116138 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.128858 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" (UID: "8a0ef2f6-3113-478c-bb8c-9ea8e004a27d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.133650 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150168 4804 scope.go:117] "RemoveContainer" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150536 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") pod \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: E0128 11:26:48.150903 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5\": container with ID starting with fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5 not found: ID does not exist" containerID="fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150950 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5"} err="failed to get container status \"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5\": rpc error: code = NotFound desc = could not find container \"fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5\": container with ID starting with fe62312ca30484565e2911e3979dd624984ec6bdab14fbb379bac035e5f41bc5 not found: ID does not exist" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.150986 4804 scope.go:117] "RemoveContainer" 
containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" Jan 28 11:26:48 crc kubenswrapper[4804]: E0128 11:26:48.154993 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129\": container with ID starting with f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129 not found: ID does not exist" containerID="f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.155035 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129"} err="failed to get container status \"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129\": rpc error: code = NotFound desc = could not find container \"f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129\": container with ID starting with f6b561dfd74bd0608fe5e5715082f1748f705a8d3c70b56213a9e9dd71a73129 not found: ID does not exist" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.155060 4804 scope.go:117] "RemoveContainer" containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" Jan 28 11:26:48 crc kubenswrapper[4804]: E0128 11:26:48.155375 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1\": container with ID starting with db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1 not found: ID does not exist" containerID="db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.155406 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1"} err="failed to get container status \"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1\": rpc error: code = NotFound desc = could not find container \"db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1\": container with ID starting with db5a8c39a47288e2a3d1bd3ec1f9d3852f734582ba3c66bbf5dc81f8dd6799e1 not found: ID does not exist" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160192 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") pod \"bb959019-0f9d-4210-8410-6b3c00b02337\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160298 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") pod \"b641b655-0d3e-4838-8c87-fc72873f1944\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160332 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") pod \"6caae643-ab85-4628-bcb1-9c0ecc48c568\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160371 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") pod \"6caae643-ab85-4628-bcb1-9c0ecc48c568\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160400 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ppxwf\" (UniqueName: \"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") pod \"bb959019-0f9d-4210-8410-6b3c00b02337\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160428 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") pod \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160466 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") pod \"b641b655-0d3e-4838-8c87-fc72873f1944\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160499 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") pod \"6caae643-ab85-4628-bcb1-9c0ecc48c568\" (UID: \"6caae643-ab85-4628-bcb1-9c0ecc48c568\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160523 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") pod \"bb959019-0f9d-4210-8410-6b3c00b02337\" (UID: \"bb959019-0f9d-4210-8410-6b3c00b02337\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160576 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdvxr\" (UniqueName: 
\"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") pod \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\" (UID: \"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.160603 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") pod \"b641b655-0d3e-4838-8c87-fc72873f1944\" (UID: \"b641b655-0d3e-4838-8c87-fc72873f1944\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.161014 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.161037 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.161052 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wms6l\" (UniqueName: \"kubernetes.io/projected/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d-kube-api-access-wms6l\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.164703 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt" (OuterVolumeSpecName: "kube-api-access-dklnt") pod "b641b655-0d3e-4838-8c87-fc72873f1944" (UID: "b641b655-0d3e-4838-8c87-fc72873f1944"). InnerVolumeSpecName "kube-api-access-dklnt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.165002 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities" (OuterVolumeSpecName: "utilities") pod "6caae643-ab85-4628-bcb1-9c0ecc48c568" (UID: "6caae643-ab85-4628-bcb1-9c0ecc48c568"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.165631 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities" (OuterVolumeSpecName: "utilities") pod "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" (UID: "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.166346 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "bb959019-0f9d-4210-8410-6b3c00b02337" (UID: "bb959019-0f9d-4210-8410-6b3c00b02337"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.166968 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities" (OuterVolumeSpecName: "utilities") pod "b641b655-0d3e-4838-8c87-fc72873f1944" (UID: "b641b655-0d3e-4838-8c87-fc72873f1944"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.168134 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p" (OuterVolumeSpecName: "kube-api-access-4qz4p") pod "6caae643-ab85-4628-bcb1-9c0ecc48c568" (UID: "6caae643-ab85-4628-bcb1-9c0ecc48c568"). InnerVolumeSpecName "kube-api-access-4qz4p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.169725 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr" (OuterVolumeSpecName: "kube-api-access-zdvxr") pod "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" (UID: "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d"). InnerVolumeSpecName "kube-api-access-zdvxr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.169767 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf" (OuterVolumeSpecName: "kube-api-access-ppxwf") pod "bb959019-0f9d-4210-8410-6b3c00b02337" (UID: "bb959019-0f9d-4210-8410-6b3c00b02337"). InnerVolumeSpecName "kube-api-access-ppxwf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.170400 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "bb959019-0f9d-4210-8410-6b3c00b02337" (UID: "bb959019-0f9d-4210-8410-6b3c00b02337"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.202648 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6caae643-ab85-4628-bcb1-9c0ecc48c568" (UID: "6caae643-ab85-4628-bcb1-9c0ecc48c568"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.209532 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" (UID: "3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265008 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dklnt\" (UniqueName: \"kubernetes.io/projected/b641b655-0d3e-4838-8c87-fc72873f1944-kube-api-access-dklnt\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265041 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qz4p\" (UniqueName: \"kubernetes.io/projected/6caae643-ab85-4628-bcb1-9c0ecc48c568-kube-api-access-4qz4p\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265052 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265062 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ppxwf\" (UniqueName: 
\"kubernetes.io/projected/bb959019-0f9d-4210-8410-6b3c00b02337-kube-api-access-ppxwf\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265071 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265079 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6caae643-ab85-4628-bcb1-9c0ecc48c568-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265087 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265097 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdvxr\" (UniqueName: \"kubernetes.io/projected/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-kube-api-access-zdvxr\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265104 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265112 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.265119 4804 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bb959019-0f9d-4210-8410-6b3c00b02337-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.294422 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b641b655-0d3e-4838-8c87-fc72873f1944" (UID: "b641b655-0d3e-4838-8c87-fc72873f1944"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.321412 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.324248 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gw5tb"] Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.363077 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.366793 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b641b655-0d3e-4838-8c87-fc72873f1944-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.370392 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-s76k6"] Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.473826 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.520422 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 
11:26:48.521188 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569546 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569642 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569654 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569746 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569849 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.569915 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570181 4804 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570203 4804 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570214 4804 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.570229 4804 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.575711 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.671652 4804 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.922440 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" path="/var/lib/kubelet/pods/8a0ef2f6-3113-478c-bb8c-9ea8e004a27d/volumes" Jan 28 11:26:48 crc kubenswrapper[4804]: I0128 11:26:48.923595 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.022620 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.022649 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-ml79j" event={"ID":"bb959019-0f9d-4210-8410-6b3c00b02337","Type":"ContainerDied","Data":"da180074ac3e1b702af197f95701d1cff294f3e8895503fdbfbde3d61d0ef87e"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.022999 4804 scope.go:117] "RemoveContainer" containerID="4bceb0781d7092ea24802dae15015144fbb316afc1359d2ddad36759cb909c58" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.025090 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" event={"ID":"349fc9e3-a236-44fd-b7b9-ee08f25c58fd","Type":"ContainerStarted","Data":"9da03e5fdc5f3c0c17b1a579763363b0e575c125e01822f30376b68abbdbe2c9"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.025133 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" event={"ID":"349fc9e3-a236-44fd-b7b9-ee08f25c58fd","Type":"ContainerStarted","Data":"fd05b2d05ff2ac6e274fd94eb02e4e64d7931052ca41aa3d8272968ecffe0ef4"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.027237 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9b7c6" event={"ID":"6caae643-ab85-4628-bcb1-9c0ecc48c568","Type":"ContainerDied","Data":"1e8bd873fb7adcc76814d0eeeb9b78c6d6981cbd42db2825d7cfc8757dac3b5e"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.027394 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9b7c6" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.036509 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jmw4q" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.037939 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jmw4q" event={"ID":"b641b655-0d3e-4838-8c87-fc72873f1944","Type":"ContainerDied","Data":"b9f8fd7843e0d657401a449864e7360a08eaacd9d3a996600b88abc62b6de5e9"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.042237 4804 scope.go:117] "RemoveContainer" containerID="3fd567f1f3948b02442e51054aa407e5b7de7526347804594426ab16143ecc19" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.045002 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hzmvb" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.045297 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hzmvb" event={"ID":"3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d","Type":"ContainerDied","Data":"ec04856dfe2459bdae75866159a6a9081b3f707d9e9a839eb94cb2acf0e4e3d1"} Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.046233 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" podStartSLOduration=2.046215579 podStartE2EDuration="2.046215579s" podCreationTimestamp="2026-01-28 11:26:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:26:49.045466467 +0000 UTC m=+284.840346461" watchObservedRunningTime="2026-01-28 11:26:49.046215579 +0000 UTC m=+284.841095573" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.048893 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.048961 4804 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" exitCode=137 Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.049533 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.071248 4804 scope.go:117] "RemoveContainer" containerID="d94669774e7242d8b7fe429cfa0919b0f629e2465d0eed385a4b1380750d4b02" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.074939 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.086445 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-ml79j"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.091148 4804 scope.go:117] "RemoveContainer" containerID="04c43db3e70bb20141e7892290639067d3851e183e916843eb2d0aab2b130c9a" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.094068 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.101922 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hzmvb"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.106539 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.109984 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9b7c6"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.116741 4804 scope.go:117] "RemoveContainer" containerID="631aa94b77086e59cc4974535410f633d8a570238e4eb3b9012dadf08b6ae781" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.117515 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.121453 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-marketplace/redhat-operators-jmw4q"] Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.128641 4804 scope.go:117] "RemoveContainer" containerID="7725654f9e2f3db24252d95301f4512ca56872a844c3f809462e7438542a69f4" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.142060 4804 scope.go:117] "RemoveContainer" containerID="38d5811043b3f5ad798e66586c4ba52ca430539e3b5096297f2d0e1b1b72ab80" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.168279 4804 scope.go:117] "RemoveContainer" containerID="42f2eeb16ac98652bf013b7ae171fa09175e007ba579b10ded8267ad8190a2a1" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.181349 4804 scope.go:117] "RemoveContainer" containerID="ec196e8414d1104384ba418ed46e3931a8aa99482add9614aaedd0533c6a0b63" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.197315 4804 scope.go:117] "RemoveContainer" containerID="224ba74fdc92a764e31b68f322cd68766ad88b0938c015d6c3219ec78f441a34" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.211007 4804 scope.go:117] "RemoveContainer" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.229873 4804 scope.go:117] "RemoveContainer" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" Jan 28 11:26:49 crc kubenswrapper[4804]: E0128 11:26:49.230367 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b\": container with ID starting with a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b not found: ID does not exist" containerID="a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.230398 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b"} err="failed to get container status \"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b\": rpc error: code = NotFound desc = could not find container \"a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b\": container with ID starting with a4e563f0c02cd9e634c2b04f733e1780a16001c51b8e76c643f4d8ba85ab5c0b not found: ID does not exist" Jan 28 11:26:49 crc kubenswrapper[4804]: I0128 11:26:49.809340 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.064557 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.074779 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-s76k6" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.921974 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" path="/var/lib/kubelet/pods/3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d/volumes" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.923499 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" path="/var/lib/kubelet/pods/6caae643-ab85-4628-bcb1-9c0ecc48c568/volumes" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.924138 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" path="/var/lib/kubelet/pods/b641b655-0d3e-4838-8c87-fc72873f1944/volumes" Jan 28 11:26:50 crc kubenswrapper[4804]: I0128 11:26:50.925304 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" 
path="/var/lib/kubelet/pods/bb959019-0f9d-4210-8410-6b3c00b02337/volumes" Jan 28 11:27:04 crc kubenswrapper[4804]: I0128 11:27:04.743733 4804 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.487784 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wbxgh"] Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488519 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488534 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488548 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488556 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488566 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488573 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488584 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488591 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" 
containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488603 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488610 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488621 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488628 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488644 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488650 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488660 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488666 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488675 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488682 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" 
containerName="marketplace-operator" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488690 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488696 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488705 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488710 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="extract-utilities" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488717 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488723 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: E0128 11:27:35.488731 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488737 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="extract-content" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488823 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb959019-0f9d-4210-8410-6b3c00b02337" containerName="marketplace-operator" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488834 4804 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="6caae643-ab85-4628-bcb1-9c0ecc48c568" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488842 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b641b655-0d3e-4838-8c87-fc72873f1944" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488850 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3a8d8bca-1ae3-44d1-9793-29fc2a2f5e8d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.488861 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8a0ef2f6-3113-478c-bb8c-9ea8e004a27d" containerName="registry-server" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.489547 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.491885 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.513522 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbxgh"] Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.589840 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-utilities\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.590024 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-catalog-content\") pod \"community-operators-wbxgh\" (UID: 
\"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.590085 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9694\" (UniqueName: \"kubernetes.io/projected/91e77bd7-6a7b-4b91-b47d-61e61d157acb-kube-api-access-d9694\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691401 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-utilities\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691476 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-catalog-content\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691531 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9694\" (UniqueName: \"kubernetes.io/projected/91e77bd7-6a7b-4b91-b47d-61e61d157acb-kube-api-access-d9694\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691915 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzfl"] Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.691993 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-utilities\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.692055 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/91e77bd7-6a7b-4b91-b47d-61e61d157acb-catalog-content\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.708727 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.710978 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzfl"] Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.712370 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.721977 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9694\" (UniqueName: \"kubernetes.io/projected/91e77bd7-6a7b-4b91-b47d-61e61d157acb-kube-api-access-d9694\") pod \"community-operators-wbxgh\" (UID: \"91e77bd7-6a7b-4b91-b47d-61e61d157acb\") " pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.823437 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.894175 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-utilities\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.894373 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-catalog-content\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.894421 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptgdf\" (UniqueName: \"kubernetes.io/projected/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-kube-api-access-ptgdf\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.995449 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptgdf\" (UniqueName: \"kubernetes.io/projected/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-kube-api-access-ptgdf\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.995527 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-utilities\") pod 
\"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.995573 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-catalog-content\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.996166 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-catalog-content\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:35 crc kubenswrapper[4804]: I0128 11:27:35.996399 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-utilities\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.013463 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptgdf\" (UniqueName: \"kubernetes.io/projected/7e326a9c-bf0f-4d43-87f0-f4c4e2667118-kube-api-access-ptgdf\") pod \"redhat-marketplace-mfzfl\" (UID: \"7e326a9c-bf0f-4d43-87f0-f4c4e2667118\") " pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.037986 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.194506 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wbxgh"] Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.364685 4804 generic.go:334] "Generic (PLEG): container finished" podID="91e77bd7-6a7b-4b91-b47d-61e61d157acb" containerID="f7bfcb1fa1ea45b816b10d95c5b6718c2ba8bd93e908b6478ec77a57e3d240ab" exitCode=0 Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.364723 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerDied","Data":"f7bfcb1fa1ea45b816b10d95c5b6718c2ba8bd93e908b6478ec77a57e3d240ab"} Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.364744 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerStarted","Data":"08a30b0f1eb7c69a3bedcee0c785b4f32de906b4ffb7be33be7d3fdf850fe06c"} Jan 28 11:27:36 crc kubenswrapper[4804]: I0128 11:27:36.407857 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mfzfl"] Jan 28 11:27:36 crc kubenswrapper[4804]: W0128 11:27:36.412858 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7e326a9c_bf0f_4d43_87f0_f4c4e2667118.slice/crio-455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1 WatchSource:0}: Error finding container 455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1: Status 404 returned error can't find the container with id 455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1 Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.370786 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="7e326a9c-bf0f-4d43-87f0-f4c4e2667118" containerID="609e18455ed5ea2a438ea430e22a3ace680e973fb7aec4a152150642a40ad467" exitCode=0 Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.370874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerDied","Data":"609e18455ed5ea2a438ea430e22a3ace680e973fb7aec4a152150642a40ad467"} Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.371178 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerStarted","Data":"455712f693f026e48772fb731f0096491a8bcf7e749dbadb8c84b6d1f7d299c1"} Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.373689 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerStarted","Data":"ab1ed78c1d05a6ec47e45c17d34c73abafd388ef6ca139e5f120baefc9ffeb59"} Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.881783 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hfp4x"] Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.882735 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.884399 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 28 11:27:37 crc kubenswrapper[4804]: I0128 11:27:37.892695 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfp4x"] Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.040039 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9smm\" (UniqueName: \"kubernetes.io/projected/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-kube-api-access-q9smm\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.040157 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-catalog-content\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.040198 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-utilities\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.084155 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"] Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.085308 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.087837 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.096431 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"] Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.140804 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9smm\" (UniqueName: \"kubernetes.io/projected/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-kube-api-access-q9smm\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.140877 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-catalog-content\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.140935 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-utilities\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.141318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-utilities\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 
11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.141530 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-catalog-content\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.166068 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9smm\" (UniqueName: \"kubernetes.io/projected/64d5e8a4-00e0-4aae-988b-d10e5f36cae7-kube-api-access-q9smm\") pod \"redhat-operators-hfp4x\" (UID: \"64d5e8a4-00e0-4aae-988b-d10e5f36cae7\") " pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.242775 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.242963 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.243034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " 
pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.285192 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.343936 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344013 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344065 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344717 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.344917 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.363726 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"certified-operators-8n6zc\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") " pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.386090 4804 generic.go:334] "Generic (PLEG): container finished" podID="91e77bd7-6a7b-4b91-b47d-61e61d157acb" containerID="ab1ed78c1d05a6ec47e45c17d34c73abafd388ef6ca139e5f120baefc9ffeb59" exitCode=0 Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.386202 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerDied","Data":"ab1ed78c1d05a6ec47e45c17d34c73abafd388ef6ca139e5f120baefc9ffeb59"} Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.388640 4804 generic.go:334] "Generic (PLEG): container finished" podID="7e326a9c-bf0f-4d43-87f0-f4c4e2667118" containerID="b35db1a1ff34ee952dcc074f0d6eefdc5c99af0e19ceed537d4718259de247de" exitCode=0 Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.388691 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerDied","Data":"b35db1a1ff34ee952dcc074f0d6eefdc5c99af0e19ceed537d4718259de247de"} Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.408821 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.501854 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hfp4x"] Jan 28 11:27:38 crc kubenswrapper[4804]: W0128 11:27:38.510139 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod64d5e8a4_00e0_4aae_988b_d10e5f36cae7.slice/crio-873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d WatchSource:0}: Error finding container 873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d: Status 404 returned error can't find the container with id 873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d Jan 28 11:27:38 crc kubenswrapper[4804]: I0128 11:27:38.789786 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"] Jan 28 11:27:38 crc kubenswrapper[4804]: W0128 11:27:38.797192 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod477f5ec7_c491_494c_add6_a233798ffdfa.slice/crio-5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18 WatchSource:0}: Error finding container 5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18: Status 404 returned error can't find the container with id 5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18 Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.394954 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mfzfl" event={"ID":"7e326a9c-bf0f-4d43-87f0-f4c4e2667118","Type":"ContainerStarted","Data":"ac2864fcbdeba1e2f84d17b9ea054bc897d0e9de7a1e00ad13edbb198811ca36"} Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.398702 4804 generic.go:334] "Generic (PLEG): container finished" podID="477f5ec7-c491-494c-add6-a233798ffdfa" 
containerID="97869d81e8512d2767849c948a0eaf69907f795ddaf291cb6977a857a679da98" exitCode=0 Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.398767 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"97869d81e8512d2767849c948a0eaf69907f795ddaf291cb6977a857a679da98"} Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.398796 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerStarted","Data":"5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18"} Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.402492 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wbxgh" event={"ID":"91e77bd7-6a7b-4b91-b47d-61e61d157acb","Type":"ContainerStarted","Data":"a42ac5a1159848a8a66f9af3fde7993fcb0c35fa30816a1cfb4649ebec61d084"} Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.404424 4804 generic.go:334] "Generic (PLEG): container finished" podID="64d5e8a4-00e0-4aae-988b-d10e5f36cae7" containerID="20b7980f00a6c53ee52c8489361ac28da3d88b28866ffa48c844fc6ceebb5e60" exitCode=0 Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.404457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerDied","Data":"20b7980f00a6c53ee52c8489361ac28da3d88b28866ffa48c844fc6ceebb5e60"} Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.404478 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerStarted","Data":"873e7513363f9fcae3cd3a724aba5f78273355b09948ce4f785a823d289eee7d"} Jan 28 11:27:39 crc kubenswrapper[4804]: 
I0128 11:27:39.418731 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mfzfl" podStartSLOduration=2.885334306 podStartE2EDuration="4.418714538s" podCreationTimestamp="2026-01-28 11:27:35 +0000 UTC" firstStartedPulling="2026-01-28 11:27:37.3721794 +0000 UTC m=+333.167059384" lastFinishedPulling="2026-01-28 11:27:38.905559632 +0000 UTC m=+334.700439616" observedRunningTime="2026-01-28 11:27:39.414323982 +0000 UTC m=+335.209203986" watchObservedRunningTime="2026-01-28 11:27:39.418714538 +0000 UTC m=+335.213594522" Jan 28 11:27:39 crc kubenswrapper[4804]: I0128 11:27:39.473020 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wbxgh" podStartSLOduration=2.041442705 podStartE2EDuration="4.472994023s" podCreationTimestamp="2026-01-28 11:27:35 +0000 UTC" firstStartedPulling="2026-01-28 11:27:36.366088704 +0000 UTC m=+332.160968688" lastFinishedPulling="2026-01-28 11:27:38.797640022 +0000 UTC m=+334.592520006" observedRunningTime="2026-01-28 11:27:39.470732163 +0000 UTC m=+335.265612157" watchObservedRunningTime="2026-01-28 11:27:39.472994023 +0000 UTC m=+335.267874027" Jan 28 11:27:40 crc kubenswrapper[4804]: I0128 11:27:40.411817 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerStarted","Data":"6ae78751e4b835f46fe78c17bde8fdeb2658a62258da90def68924d61e9cc24d"} Jan 28 11:27:40 crc kubenswrapper[4804]: I0128 11:27:40.413829 4804 generic.go:334] "Generic (PLEG): container finished" podID="477f5ec7-c491-494c-add6-a233798ffdfa" containerID="5eeef8445a28c47bafd383bf532c0bbf3abc3e3acbe80741d1fb008b29abd5a7" exitCode=0 Jan 28 11:27:40 crc kubenswrapper[4804]: I0128 11:27:40.413907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" 
event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"5eeef8445a28c47bafd383bf532c0bbf3abc3e3acbe80741d1fb008b29abd5a7"} Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.422049 4804 generic.go:334] "Generic (PLEG): container finished" podID="64d5e8a4-00e0-4aae-988b-d10e5f36cae7" containerID="6ae78751e4b835f46fe78c17bde8fdeb2658a62258da90def68924d61e9cc24d" exitCode=0 Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.422164 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerDied","Data":"6ae78751e4b835f46fe78c17bde8fdeb2658a62258da90def68924d61e9cc24d"} Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.425004 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerStarted","Data":"8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862"} Jan 28 11:27:41 crc kubenswrapper[4804]: I0128 11:27:41.462689 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8n6zc" podStartSLOduration=2.035730719 podStartE2EDuration="3.462671757s" podCreationTimestamp="2026-01-28 11:27:38 +0000 UTC" firstStartedPulling="2026-01-28 11:27:39.399966676 +0000 UTC m=+335.194846670" lastFinishedPulling="2026-01-28 11:27:40.826907734 +0000 UTC m=+336.621787708" observedRunningTime="2026-01-28 11:27:41.45824855 +0000 UTC m=+337.253128524" watchObservedRunningTime="2026-01-28 11:27:41.462671757 +0000 UTC m=+337.257551741" Jan 28 11:27:42 crc kubenswrapper[4804]: I0128 11:27:42.582696 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Jan 28 11:27:42 crc kubenswrapper[4804]: I0128 11:27:42.583261 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:27:44 crc kubenswrapper[4804]: I0128 11:27:44.453044 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hfp4x" event={"ID":"64d5e8a4-00e0-4aae-988b-d10e5f36cae7","Type":"ContainerStarted","Data":"d454ae9d3be5e6e97b5bc793769ffafee3651468a260dcdf014b2b36201218e9"} Jan 28 11:27:44 crc kubenswrapper[4804]: I0128 11:27:44.489079 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hfp4x" podStartSLOduration=3.547584105 podStartE2EDuration="7.489038427s" podCreationTimestamp="2026-01-28 11:27:37 +0000 UTC" firstStartedPulling="2026-01-28 11:27:39.405726345 +0000 UTC m=+335.200606329" lastFinishedPulling="2026-01-28 11:27:43.347180627 +0000 UTC m=+339.142060651" observedRunningTime="2026-01-28 11:27:44.486957062 +0000 UTC m=+340.281837086" watchObservedRunningTime="2026-01-28 11:27:44.489038427 +0000 UTC m=+340.283918541" Jan 28 11:27:45 crc kubenswrapper[4804]: I0128 11:27:45.824167 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:45 crc kubenswrapper[4804]: I0128 11:27:45.824625 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:45 crc kubenswrapper[4804]: I0128 11:27:45.873157 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 
11:27:46.038435 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.038509 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.075116 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.512855 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mfzfl" Jan 28 11:27:46 crc kubenswrapper[4804]: I0128 11:27:46.529099 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wbxgh" Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.285869 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.285986 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.409096 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.409187 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.455831 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:48 crc kubenswrapper[4804]: I0128 11:27:48.523127 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/certified-operators-8n6zc" Jan 28 11:27:49 crc kubenswrapper[4804]: I0128 11:27:49.325422 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hfp4x" podUID="64d5e8a4-00e0-4aae-988b-d10e5f36cae7" containerName="registry-server" probeResult="failure" output=< Jan 28 11:27:49 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s Jan 28 11:27:49 crc kubenswrapper[4804]: > Jan 28 11:27:58 crc kubenswrapper[4804]: I0128 11:27:58.331398 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:27:58 crc kubenswrapper[4804]: I0128 11:27:58.371515 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hfp4x" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.217994 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jnbsp"] Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.219846 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.233566 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jnbsp"] Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353242 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-bound-sa-token\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353284 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-registry-certificates\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353305 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-registry-tls\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353471 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9123b082-c385-4b95-b3d7-581636f5dae3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvbdk\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-kube-api-access-jvbdk\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353639 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-trusted-ca\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.353738 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9123b082-c385-4b95-b3d7-581636f5dae3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.385620 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.455353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9123b082-c385-4b95-b3d7-581636f5dae3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.455842 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-bound-sa-token\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.455975 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-registry-certificates\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456093 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-registry-tls\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc 
kubenswrapper[4804]: I0128 11:28:06.456206 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9123b082-c385-4b95-b3d7-581636f5dae3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456297 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvbdk\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-kube-api-access-jvbdk\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-trusted-ca\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.456686 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9123b082-c385-4b95-b3d7-581636f5dae3-ca-trust-extracted\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.457350 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-registry-certificates\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.457939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9123b082-c385-4b95-b3d7-581636f5dae3-trusted-ca\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.462993 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-registry-tls\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.463000 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9123b082-c385-4b95-b3d7-581636f5dae3-installation-pull-secrets\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.473325 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvbdk\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-kube-api-access-jvbdk\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: \"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.476480 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9123b082-c385-4b95-b3d7-581636f5dae3-bound-sa-token\") pod \"image-registry-66df7c8f76-jnbsp\" (UID: 
\"9123b082-c385-4b95-b3d7-581636f5dae3\") " pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:06 crc kubenswrapper[4804]: I0128 11:28:06.534943 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:07 crc kubenswrapper[4804]: I0128 11:28:07.420236 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-jnbsp"] Jan 28 11:28:07 crc kubenswrapper[4804]: W0128 11:28:07.422017 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9123b082_c385_4b95_b3d7_581636f5dae3.slice/crio-c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1 WatchSource:0}: Error finding container c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1: Status 404 returned error can't find the container with id c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1 Jan 28 11:28:08 crc kubenswrapper[4804]: I0128 11:28:08.175052 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" event={"ID":"9123b082-c385-4b95-b3d7-581636f5dae3","Type":"ContainerStarted","Data":"c644cbbe0a4bcf135fc2541ffa05d08c12850bbf8f8731b5d5917f1edcc694d1"} Jan 28 11:28:09 crc kubenswrapper[4804]: I0128 11:28:09.180577 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" event={"ID":"9123b082-c385-4b95-b3d7-581636f5dae3","Type":"ContainerStarted","Data":"d4e4ca4f31104bfe30f08bfff7f58688eba391347cfbc1478433ee3646138d47"} Jan 28 11:28:09 crc kubenswrapper[4804]: I0128 11:28:09.182197 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:09 crc kubenswrapper[4804]: I0128 11:28:09.201872 4804 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" podStartSLOduration=3.201856104 podStartE2EDuration="3.201856104s" podCreationTimestamp="2026-01-28 11:28:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:28:09.198384396 +0000 UTC m=+364.993264380" watchObservedRunningTime="2026-01-28 11:28:09.201856104 +0000 UTC m=+364.996736088" Jan 28 11:28:12 crc kubenswrapper[4804]: I0128 11:28:12.582531 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:28:12 crc kubenswrapper[4804]: I0128 11:28:12.582607 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:28:26 crc kubenswrapper[4804]: I0128 11:28:26.539701 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-jnbsp" Jan 28 11:28:26 crc kubenswrapper[4804]: I0128 11:28:26.602493 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.582306 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:28:42 crc 
kubenswrapper[4804]: I0128 11:28:42.583086 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.583171 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.583832 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:28:42 crc kubenswrapper[4804]: I0128 11:28:42.583908 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e" gracePeriod=600 Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.391608 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e" exitCode=0 Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.391700 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e"} 
Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.392277 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c"} Jan 28 11:28:43 crc kubenswrapper[4804]: I0128 11:28:43.392313 4804 scope.go:117] "RemoveContainer" containerID="3a4163c8eeb5b5f948f9997c0fefdfbc7d381fae13a53567d9971adbf6ca87c5" Jan 28 11:28:51 crc kubenswrapper[4804]: I0128 11:28:51.644710 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry" containerID="cri-o://83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191" gracePeriod=30 Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.076252 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161719 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161787 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161849 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.161962 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.162034 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.162092 4804 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.163034 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.163468 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.163683 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") pod \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\" (UID: \"436e3017-a787-4e60-97cd-7cc0cdd47a2d\") " Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.164119 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.164824 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" 
(UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.173369 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.174032 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.174290 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b" (OuterVolumeSpecName: "kube-api-access-mnf5b") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "kube-api-access-mnf5b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.174740 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.178706 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.201097 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "436e3017-a787-4e60-97cd-7cc0cdd47a2d" (UID: "436e3017-a787-4e60-97cd-7cc0cdd47a2d"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.264935 4804 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.264984 4804 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/436e3017-a787-4e60-97cd-7cc0cdd47a2d-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265008 4804 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265022 4804 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265036 4804 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/436e3017-a787-4e60-97cd-7cc0cdd47a2d-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.265049 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnf5b\" (UniqueName: \"kubernetes.io/projected/436e3017-a787-4e60-97cd-7cc0cdd47a2d-kube-api-access-mnf5b\") on node \"crc\" DevicePath \"\"" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453255 4804 generic.go:334] "Generic (PLEG): container finished" podID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191" exitCode=0 Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453305 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerDied","Data":"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"} Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453337 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" event={"ID":"436e3017-a787-4e60-97cd-7cc0cdd47a2d","Type":"ContainerDied","Data":"21c407385a0e63e468749b798e82d759e0bd8cab55527e3595f2c32049181c1c"} Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453357 4804 scope.go:117] "RemoveContainer" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.453478 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-src4s" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.475209 4804 scope.go:117] "RemoveContainer" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191" Jan 28 11:28:52 crc kubenswrapper[4804]: E0128 11:28:52.477196 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191\": container with ID starting with 83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191 not found: ID does not exist" containerID="83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.477232 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191"} err="failed to get container status \"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191\": rpc error: code = NotFound desc = could not find container \"83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191\": container with ID starting with 83fe2e5ba10b37c065c911165210c1b47e88b589ee56f04e8cdc1314e1a78191 not found: ID does not exist" Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.485047 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.490427 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-src4s"] Jan 28 11:28:52 crc kubenswrapper[4804]: I0128 11:28:52.923213 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" path="/var/lib/kubelet/pods/436e3017-a787-4e60-97cd-7cc0cdd47a2d/volumes" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 
11:30:00.181554 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"] Jan 28 11:30:00 crc kubenswrapper[4804]: E0128 11:30:00.182408 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.182426 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.182559 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="436e3017-a787-4e60-97cd-7cc0cdd47a2d" containerName="registry" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.183025 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.185682 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.189372 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.191205 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"] Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.337137 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc 
kubenswrapper[4804]: I0128 11:30:00.337319 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.337351 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.438493 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.438572 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.438594 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod 
\"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.440976 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.447033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.453740 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod \"collect-profiles-29493330-gcdc5\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.509100 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.704287 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"] Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.876236 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerStarted","Data":"647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551"} Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.876569 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerStarted","Data":"6757e84d2e7c8383064f3a041216b2a08f26224137009b805ed7b77f7c0e10c3"} Jan 28 11:30:00 crc kubenswrapper[4804]: I0128 11:30:00.894390 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" podStartSLOduration=0.894375044 podStartE2EDuration="894.375044ms" podCreationTimestamp="2026-01-28 11:30:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:30:00.891997898 +0000 UTC m=+476.686877892" watchObservedRunningTime="2026-01-28 11:30:00.894375044 +0000 UTC m=+476.689255038" Jan 28 11:30:01 crc kubenswrapper[4804]: I0128 11:30:01.883987 4804 generic.go:334] "Generic (PLEG): container finished" podID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerID="647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551" exitCode=0 Jan 28 11:30:01 crc kubenswrapper[4804]: I0128 11:30:01.884042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerDied","Data":"647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551"} Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.120833 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.275406 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") pod \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.275514 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") pod \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.275561 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") pod \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\" (UID: \"83929dab-2f27-41a0-aaea-ec500ff4b6e7\") " Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.276416 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume" (OuterVolumeSpecName: "config-volume") pod "83929dab-2f27-41a0-aaea-ec500ff4b6e7" (UID: "83929dab-2f27-41a0-aaea-ec500ff4b6e7"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.281441 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n" (OuterVolumeSpecName: "kube-api-access-fqx8n") pod "83929dab-2f27-41a0-aaea-ec500ff4b6e7" (UID: "83929dab-2f27-41a0-aaea-ec500ff4b6e7"). InnerVolumeSpecName "kube-api-access-fqx8n". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.281524 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "83929dab-2f27-41a0-aaea-ec500ff4b6e7" (UID: "83929dab-2f27-41a0-aaea-ec500ff4b6e7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.377821 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/83929dab-2f27-41a0-aaea-ec500ff4b6e7-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.377858 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqx8n\" (UniqueName: \"kubernetes.io/projected/83929dab-2f27-41a0-aaea-ec500ff4b6e7-kube-api-access-fqx8n\") on node \"crc\" DevicePath \"\"" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.377869 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83929dab-2f27-41a0-aaea-ec500ff4b6e7-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.898439 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" 
event={"ID":"83929dab-2f27-41a0-aaea-ec500ff4b6e7","Type":"ContainerDied","Data":"6757e84d2e7c8383064f3a041216b2a08f26224137009b805ed7b77f7c0e10c3"} Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.898669 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6757e84d2e7c8383064f3a041216b2a08f26224137009b805ed7b77f7c0e10c3" Jan 28 11:30:03 crc kubenswrapper[4804]: I0128 11:30:03.898500 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5" Jan 28 11:30:42 crc kubenswrapper[4804]: I0128 11:30:42.582244 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:30:42 crc kubenswrapper[4804]: I0128 11:30:42.582825 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:31:12 crc kubenswrapper[4804]: I0128 11:31:12.582091 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:31:12 crc kubenswrapper[4804]: I0128 11:31:12.583952 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.582081 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.582680 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.582725 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.583257 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:31:42 crc kubenswrapper[4804]: I0128 11:31:42.583311 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c" gracePeriod=600 Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480436 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c" exitCode=0 Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480523 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c"} Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b"} Jan 28 11:31:43 crc kubenswrapper[4804]: I0128 11:31:43.480789 4804 scope.go:117] "RemoveContainer" containerID="d6bd6423ac842a17ff5659b7f0672fd055e5689dc54e8deaa66167b5157cd76e" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.236917 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.238700 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" containerID="cri-o://e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239176 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" containerID="cri-o://895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239295 4804 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" containerID="cri-o://a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239395 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" containerID="cri-o://035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239487 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239576 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" containerID="cri-o://d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.239663 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" containerID="cri-o://3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.320620 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" 
containerID="cri-o://178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" gracePeriod=30 Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.586404 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.589282 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-acl-logging/0.log" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.590013 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-controller/0.log" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.592465 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643283 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6qqcq"] Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643525 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643546 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643561 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643570 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 
11:33:24.643580 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kubecfg-setup" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643588 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kubecfg-setup" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643595 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643602 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643612 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643619 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643628 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643637 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643650 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643659 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 
11:33:24.643671 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643677 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643688 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643696 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643705 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643712 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643723 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643730 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643743 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643750 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.643760 
4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerName="collect-profiles" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643768 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerName="collect-profiles" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643870 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" containerName="collect-profiles" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643885 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-ovn-metrics" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643911 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643919 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="kube-rbac-proxy-node" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643927 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643935 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="northd" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643944 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="sbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643951 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc 
kubenswrapper[4804]: I0128 11:33:24.643957 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-acl-logging" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643965 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="nbdb" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.643972 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovn-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: E0128 11:33:24.644056 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.644064 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.644147 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.644332 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerName="ovnkube-controller" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.645732 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.716915 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.716985 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717010 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717058 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717058 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket" (OuterVolumeSpecName: "log-socket") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717150 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717207 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717449 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717498 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717525 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717552 4804 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717585 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717612 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717600 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717638 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717696 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717721 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717776 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717800 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717839 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717868 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") pod \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\" (UID: \"686039c6-ae16-45ac-bb9f-4c39d57d6c80\") " Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718049 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-netd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718083 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-kubelet\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718102 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718127 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-env-overrides\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718155 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-log-socket\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718187 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: 
\"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718205 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-node-log\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718229 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718257 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/241322ad-bbc4-487d-9bd6-58659d5b9882-ovn-node-metrics-cert\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718278 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-etc-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717667 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch" (OuterVolumeSpecName: 
"etc-openvswitch") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718300 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-config\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.717676 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718221 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718271 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718295 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log" (OuterVolumeSpecName: "node-log") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718367 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718395 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718440 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash" (OuterVolumeSpecName: "host-slash") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718469 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718494 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "run-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-var-lib-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718534 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbfqd\" (UniqueName: \"kubernetes.io/projected/241322ad-bbc4-487d-9bd6-58659d5b9882-kube-api-access-wbfqd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718562 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-systemd-units\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718582 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-script-lib\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718603 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-netns\") pod 
\"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718629 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718633 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-ovn\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718722 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-systemd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718793 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-slash\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-bin\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718868 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718942 4804 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718955 4804 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718966 4804 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718976 4804 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718985 4804 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-openvswitch\") on node \"crc\" 
DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.718993 4804 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-slash\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719002 4804 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-node-log\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719011 4804 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-log-socket\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719020 4804 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719037 4804 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719048 4804 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719056 4804 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719067 4804 reconciler_common.go:293] "Volume detached for volume 
\"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719090 4804 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719103 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.719116 4804 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.723539 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp" (OuterVolumeSpecName: "kube-api-access-55hnp") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "kube-api-access-55hnp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.723816 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.732200 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "686039c6-ae16-45ac-bb9f-4c39d57d6c80" (UID: "686039c6-ae16-45ac-bb9f-4c39d57d6c80"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820222 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-node-log\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-node-log\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820646 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 
11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820742 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820781 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/241322ad-bbc4-487d-9bd6-58659d5b9882-ovn-node-metrics-cert\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820809 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-etc-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820826 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-config\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820873 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-var-lib-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 
11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820911 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-etc-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820925 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wbfqd\" (UniqueName: \"kubernetes.io/projected/241322ad-bbc4-487d-9bd6-58659d5b9882-kube-api-access-wbfqd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.820981 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-systemd-units\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-script-lib\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821033 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-netns\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821074 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-ovn\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821102 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-systemd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821135 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-slash\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821166 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-bin\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821199 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-netd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821210 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-netns\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821230 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-kubelet\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-systemd-units\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821253 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821277 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-run-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821295 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-env-overrides\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821310 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-ovn\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821334 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-log-socket\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821339 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-run-systemd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-slash\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821405 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-kubelet\") pod \"ovnkube-node-6qqcq\" (UID: 
\"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821544 4804 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/686039c6-ae16-45ac-bb9f-4c39d57d6c80-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821570 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-log-socket\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821738 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821801 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-var-lib-openvswitch\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821831 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-55hnp\" (UniqueName: \"kubernetes.io/projected/686039c6-ae16-45ac-bb9f-4c39d57d6c80-kube-api-access-55hnp\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821804 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-bin\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821837 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/241322ad-bbc4-487d-9bd6-58659d5b9882-host-cni-netd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821929 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.821952 4804 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/686039c6-ae16-45ac-bb9f-4c39d57d6c80-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.822161 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-env-overrides\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.822259 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-script-lib\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.822630 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/241322ad-bbc4-487d-9bd6-58659d5b9882-ovnkube-config\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.826370 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/241322ad-bbc4-487d-9bd6-58659d5b9882-ovn-node-metrics-cert\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.843997 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wbfqd\" (UniqueName: \"kubernetes.io/projected/241322ad-bbc4-487d-9bd6-58659d5b9882-kube-api-access-wbfqd\") pod \"ovnkube-node-6qqcq\" (UID: \"241322ad-bbc4-487d-9bd6-58659d5b9882\") " pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:24 crc kubenswrapper[4804]: I0128 11:33:24.963861 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.172283 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovnkube-controller/3.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.174511 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-acl-logging/0.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.174979 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-24gvs_686039c6-ae16-45ac-bb9f-4c39d57d6c80/ovn-controller/0.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175314 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175343 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175353 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175363 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175372 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" 
containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175379 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175387 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" exitCode=143 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175394 4804 generic.go:334] "Generic (PLEG): container finished" podID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" exitCode=143 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175439 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175470 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175485 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175497 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" 
event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175508 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175519 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175533 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175545 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175552 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175558 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175563 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175570 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175576 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175583 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175589 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175596 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175606 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175615 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175623 4804 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175629 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175635 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175643 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175650 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175657 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175664 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175671 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175680 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175691 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175699 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175706 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175713 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175720 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175727 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175733 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 
11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175740 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175746 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175753 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175761 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" event={"ID":"686039c6-ae16-45ac-bb9f-4c39d57d6c80","Type":"ContainerDied","Data":"008989ec311365ac3135e782553f5de3886fb749e9f1bd87d34281455159c3df"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175771 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175779 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175786 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175793 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175799 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175805 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175811 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175817 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175824 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175830 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.175846 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.176015 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-24gvs" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.178981 4804 generic.go:334] "Generic (PLEG): container finished" podID="241322ad-bbc4-487d-9bd6-58659d5b9882" containerID="a1da0c80ef0c07fe35e93d7bc475becacbeafb7b7d255d553ad8e0602eeda221" exitCode=0 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.179063 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerDied","Data":"a1da0c80ef0c07fe35e93d7bc475becacbeafb7b7d255d553ad8e0602eeda221"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.179108 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"45240fb9064957fadccc3ca7bd1954a047d59e304e671c2c3ffcfcb98b1d6d6c"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.183816 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/2.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.185594 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.185764 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerDied","Data":"c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.185652 4804 generic.go:334] "Generic (PLEG): container finished" podID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb" exitCode=2 Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 
11:33:25.185851 4804 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d"} Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.186468 4804 scope.go:117] "RemoveContainer" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.186792 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqqmt_openshift-multus(735b7edc-6f8b-4f5f-a9ca-11964dd78266)\"" pod="openshift-multus/multus-lqqmt" podUID="735b7edc-6f8b-4f5f-a9ca-11964dd78266" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.213130 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.238701 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.264298 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.276184 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.283986 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-24gvs"] Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.289069 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.303316 4804 scope.go:117] "RemoveContainer" 
containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.316010 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.331133 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.346646 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.431349 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.464544 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.465061 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465106 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 
178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465138 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.465630 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465671 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.465697 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.466045 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc 
kubenswrapper[4804]: I0128 11:33:25.466071 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466088 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.466413 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466441 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466476 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 
11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.466784 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466809 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.466823 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.467116 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467144 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status 
\"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467162 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.467467 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467492 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467506 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.467823 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467845 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.467857 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.468188 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468213 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID 
starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468230 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: E0128 11:33:25.468476 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468501 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468517 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468775 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": 
container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.468794 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.469052 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.469075 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470206 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470227 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470652 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470683 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.470978 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471003 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471262 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not 
exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471294 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471540 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471564 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471906 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.471935 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472216 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status 
\"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472236 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472477 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472499 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472846 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.472867 4804 scope.go:117] "RemoveContainer" 
containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473203 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473227 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473538 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473560 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473900 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could 
not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.473918 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474257 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474283 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474620 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474641 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 
11:33:25.474929 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.474947 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475412 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475438 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475690 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with 
e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475709 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.475975 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476000 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476294 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476314 4804 scope.go:117] "RemoveContainer" containerID="3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e" Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476612 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e"} err="failed to get container status \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": rpc error: code = NotFound desc = could not find container \"3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e\": container with ID starting with 3a22d3d8bbbd4180b9f68f81f93a94e89b6710020577f3a605ccb7b670f7de9e not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476637 4804 scope.go:117] "RemoveContainer" containerID="895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476922 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce"} err="failed to get container status \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": rpc error: code = NotFound desc = could not find container \"895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce\": container with ID starting with 895b0ae7730e3841ed6ff83dfbf2a3b2bb266166bf48e7767583e8cefc7cbbce not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.476944 4804 scope.go:117] "RemoveContainer" containerID="a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477184 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18"} err="failed to get container status \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": rpc error: code = NotFound desc = could not find container \"a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18\": container with ID starting with a8d2fd659915715e4b930820955bc02bf595275cfa6d478de8e308d986080a18 not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477206 4804 scope.go:117] "RemoveContainer" containerID="035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477532 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897"} err="failed to get container status \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": rpc error: code = NotFound desc = could not find container \"035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897\": container with ID starting with 035126b0b682ea20cc449256c6a96bdd0197da9bc5cfbec888714286074e0897 not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477558 4804 scope.go:117] "RemoveContainer" containerID="12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477937 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c"} err="failed to get container status \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": rpc error: code = NotFound desc = could not find container \"12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c\": container with ID starting with 12bf560462a407c40646280d17832797b30750b6bb854db9f45f0b841103d24c not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.477964 4804 scope.go:117] "RemoveContainer" containerID="d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478315 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e"} err="failed to get container status \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": rpc error: code = NotFound desc = could not find container \"d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e\": container with ID starting with d8da02a5af1d28ef89e1cd816e359bb79820a7cde0a2a6157e90cfe64092ee5e not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478336 4804 scope.go:117] "RemoveContainer" containerID="3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478755 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc"} err="failed to get container status \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": rpc error: code = NotFound desc = could not find container \"3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc\": container with ID starting with 3c7e0af5565d17b6730f7bd841100fb7c9440249c3a94baf843b4989669bc7dc not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.478773 4804 scope.go:117] "RemoveContainer" containerID="e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479026 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6"} err="failed to get container status \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": rpc error: code = NotFound desc = could not find container \"e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6\": container with ID starting with e00961024d7deb9ca1b50bd28dd113cb13bd262da0bfd38bd647d5f9a5ac9fe6 not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479044 4804 scope.go:117] "RemoveContainer" containerID="048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479322 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03"} err="failed to get container status \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": rpc error: code = NotFound desc = could not find container \"048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03\": container with ID starting with 048fd103b2f4f21c4fa63cc5dcb2a671ea07672acb6e037999082deada6cea03 not found: ID does not exist"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479386 4804 scope.go:117] "RemoveContainer" containerID="178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"
Jan 28 11:33:25 crc kubenswrapper[4804]: I0128 11:33:25.479699 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d"} err="failed to get container status \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": rpc error: code = NotFound desc = could not find container \"178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d\": container with ID starting with 178385da9ffb037e99e346443a87ffd09fdcd348dd8946f9797fa00771ca1a3d not found: ID does not exist"
Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.195847 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"55ecfab67e305dc5b6e7f4356decbf27f746c94f1e55de297c28a2cb996f7115"}
Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196327 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"c0bc493ac5246614bb4596907df43ca5dc092f99c052343d34e82e143d947a3e"}
Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196341 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"d363d36ca9b8f11ef135fc30d9a6046a4ad1675b73b29f09f3fd652a4e8f08fb"}
Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196349 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"755b8db2608bc72501aaa8b2ba24273cfbab497f28f77e54eae374aba0bf6124"}
Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196358 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"d960bb6363b20e2aedc4aefbc0776a728e94f3965222c680f884277aeda30e09"}
Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.196366 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"d6891089a1bb32eb1a00b1778f8b61aef0857d487f368d4562a1e82f83e797b5"}
Jan 28 11:33:26 crc kubenswrapper[4804]: I0128 11:33:26.922402 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="686039c6-ae16-45ac-bb9f-4c39d57d6c80" path="/var/lib/kubelet/pods/686039c6-ae16-45ac-bb9f-4c39d57d6c80/volumes"
Jan 28 11:33:29 crc kubenswrapper[4804]: I0128 11:33:29.220154 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"ee57ccad0a12e421234adaa104af5b6c9040b0177132f7b4d9e9dc24b35db9d6"}
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.848111 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["crc-storage/crc-storage-crc-g7rhm"]
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.849138 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.851642 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"kube-root-ca.crt"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.851824 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"openshift-service-ca.crt"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.852175 4804 reflector.go:368] Caches populated for *v1.Secret from object-"crc-storage"/"crc-storage-dockercfg-j98lp"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.852407 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"crc-storage"/"crc-storage"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.894733 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.894780 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.894827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.995450 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.995547 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.995571 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.996217 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:30 crc kubenswrapper[4804]: I0128 11:33:30.997204 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.015343 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"crc-storage-crc-g7rhm\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") " pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.168829 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.189793 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-g7rhm"]
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.212966 4804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.213136 4804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.213167 4804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.213296 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(877dcc620883edf43c3f6bb78b1c90529bbe1516d03e9847a916cb2aa13817ca): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-g7rhm" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.234524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" event={"ID":"241322ad-bbc4-487d-9bd6-58659d5b9882","Type":"ContainerStarted","Data":"f5fc1178ecb5956f32ca89dd1eec5158503b804438c1f3c9066dfa6487876bb8"}
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.234546 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.234902 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.235189 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.235231 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.235657 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.267966 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273656 4804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273713 4804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273733 4804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:31 crc kubenswrapper[4804]: E0128 11:33:31.273771 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(39ffd1e42142b8265eb8a0cb78a1108d03b47aac64d8a617402a424c7fc4ce44): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-g7rhm" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.277761 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:31 crc kubenswrapper[4804]: I0128 11:33:31.285992 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq" podStartSLOduration=7.285960839 podStartE2EDuration="7.285960839s" podCreationTimestamp="2026-01-28 11:33:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:33:31.275429028 +0000 UTC m=+687.070309012" watchObservedRunningTime="2026-01-28 11:33:31.285960839 +0000 UTC m=+687.080840863"
Jan 28 11:33:36 crc kubenswrapper[4804]: I0128 11:33:36.918634 4804 scope.go:117] "RemoveContainer" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb"
Jan 28 11:33:36 crc kubenswrapper[4804]: E0128 11:33:36.921626 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-lqqmt_openshift-multus(735b7edc-6f8b-4f5f-a9ca-11964dd78266)\"" pod="openshift-multus/multus-lqqmt" podUID="735b7edc-6f8b-4f5f-a9ca-11964dd78266"
Jan 28 11:33:41 crc kubenswrapper[4804]: I0128 11:33:41.915204 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:41 crc kubenswrapper[4804]: I0128 11:33:41.916011 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948795 4804 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948906 4804 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948930 4804 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:41 crc kubenswrapper[4804]: E0128 11:33:41.948977 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"crc-storage-crc-g7rhm_crc-storage(2682d435-ca9a-4a86-ba99-c4dd6e59a5f5)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_crc-storage-crc-g7rhm_crc-storage_2682d435-ca9a-4a86-ba99-c4dd6e59a5f5_0(9133d83afa5c0ee89faaca901c4de75862163d34e3b94ea13649948a06a781a3): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\"" pod="crc-storage/crc-storage-crc-g7rhm" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"
Jan 28 11:33:42 crc kubenswrapper[4804]: I0128 11:33:42.582316 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:33:42 crc kubenswrapper[4804]: I0128 11:33:42.582675 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:33:48 crc kubenswrapper[4804]: I0128 11:33:48.914745 4804 scope.go:117] "RemoveContainer" containerID="c01bb0098ca9990666b7c354aacae06dac49b570cdc5308064b10a0988abe4cb"
Jan 28 11:33:49 crc kubenswrapper[4804]: I0128 11:33:49.337489 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/2.log"
Jan 28 11:33:49 crc kubenswrapper[4804]: I0128 11:33:49.338132 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/1.log"
Jan 28 11:33:49 crc kubenswrapper[4804]: I0128 11:33:49.338180 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-lqqmt" event={"ID":"735b7edc-6f8b-4f5f-a9ca-11964dd78266","Type":"ContainerStarted","Data":"22513089ed214da21f747da0505b2509c9785cf6745ef9c501eae0f5493cb868"}
Jan 28 11:33:54 crc kubenswrapper[4804]: I0128 11:33:54.990736 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6qqcq"
Jan 28 11:33:55 crc kubenswrapper[4804]: I0128 11:33:55.914246 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:55 crc kubenswrapper[4804]: I0128 11:33:55.914652 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:56 crc kubenswrapper[4804]: I0128 11:33:56.095846 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["crc-storage/crc-storage-crc-g7rhm"]
Jan 28 11:33:56 crc kubenswrapper[4804]: I0128 11:33:56.103776 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 11:33:56 crc kubenswrapper[4804]: I0128 11:33:56.385028 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g7rhm" event={"ID":"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5","Type":"ContainerStarted","Data":"5155fcfd0208a1f326202e74385175543df4236cb8aaf7939b68b4fedfc0f2e6"}
Jan 28 11:33:57 crc kubenswrapper[4804]: I0128 11:33:57.392027 4804 generic.go:334] "Generic (PLEG): container finished" podID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerID="8bb0035f4e5fd8a32c41341d09445e829b7957527742ab3149892f6f0d0302e0" exitCode=0
Jan 28 11:33:57 crc kubenswrapper[4804]: I0128 11:33:57.392099 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g7rhm" event={"ID":"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5","Type":"ContainerDied","Data":"8bb0035f4e5fd8a32c41341d09445e829b7957527742ab3149892f6f0d0302e0"}
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.663539 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.785682 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") pod \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") "
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.785724 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") pod \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") "
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.785784 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") pod \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\" (UID: \"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5\") "
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.786060 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt" (OuterVolumeSpecName: "node-mnt") pod "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" (UID: "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"). InnerVolumeSpecName "node-mnt". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.786531 4804 reconciler_common.go:293] "Volume detached for volume \"node-mnt\" (UniqueName: \"kubernetes.io/host-path/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-node-mnt\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.793510 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx" (OuterVolumeSpecName: "kube-api-access-prwlx") pod "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" (UID: "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"). InnerVolumeSpecName "kube-api-access-prwlx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.806444 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage" (OuterVolumeSpecName: "crc-storage") pod "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" (UID: "2682d435-ca9a-4a86-ba99-c4dd6e59a5f5"). InnerVolumeSpecName "crc-storage". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.887436 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-prwlx\" (UniqueName: \"kubernetes.io/projected/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-kube-api-access-prwlx\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:58 crc kubenswrapper[4804]: I0128 11:33:58.887471 4804 reconciler_common.go:293] "Volume detached for volume \"crc-storage\" (UniqueName: \"kubernetes.io/configmap/2682d435-ca9a-4a86-ba99-c4dd6e59a5f5-crc-storage\") on node \"crc\" DevicePath \"\""
Jan 28 11:33:59 crc kubenswrapper[4804]: I0128 11:33:59.404957 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="crc-storage/crc-storage-crc-g7rhm" event={"ID":"2682d435-ca9a-4a86-ba99-c4dd6e59a5f5","Type":"ContainerDied","Data":"5155fcfd0208a1f326202e74385175543df4236cb8aaf7939b68b4fedfc0f2e6"}
Jan 28 11:33:59 crc kubenswrapper[4804]: I0128 11:33:59.405003 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5155fcfd0208a1f326202e74385175543df4236cb8aaf7939b68b4fedfc0f2e6"
Jan 28 11:33:59 crc kubenswrapper[4804]: I0128 11:33:59.405012 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="crc-storage/crc-storage-crc-g7rhm"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.529097 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"]
Jan 28 11:34:07 crc kubenswrapper[4804]: E0128 11:34:07.529929 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerName="storage"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.529943 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerName="storage"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.530065 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2682d435-ca9a-4a86-ba99-c4dd6e59a5f5" containerName="storage"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.530944 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.532790 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.539462 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"]
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.589765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.590152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.590180 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.691873 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.691956 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.691985 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.692374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.692434 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.715105 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:07 crc kubenswrapper[4804]: I0128 11:34:07.853966 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"
Jan 28 11:34:08 crc kubenswrapper[4804]: I0128 11:34:08.272802 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc"]
Jan 28 11:34:08 crc kubenswrapper[4804]: W0128 11:34:08.280068 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1622f571_d0d6_4247_b47e_4dda08dea3b3.slice/crio-41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106 WatchSource:0}: Error finding container 41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106: Status 404 returned error can't find the container with id 41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106
Jan 28 11:34:08 crc kubenswrapper[4804]: I0128 11:34:08.451145 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerStarted","Data":"41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106"}
Jan 28 11:34:09 crc kubenswrapper[4804]: I0128 11:34:09.458288 4804 generic.go:334] "Generic (PLEG): container finished" podID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerID="e047fdc2cf23333cb90977f35d7c25de83d795b4da978c00a4770e83e68a278d" exitCode=0
Jan 28 11:34:09 crc kubenswrapper[4804]: I0128 11:34:09.458326 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"e047fdc2cf23333cb90977f35d7c25de83d795b4da978c00a4770e83e68a278d"}
Jan 28 11:34:11 crc kubenswrapper[4804]: I0128 11:34:11.476687 4804 generic.go:334] "Generic (PLEG): container finished" podID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerID="b060a5c852789a2a12f3919e3783e22e4e12a30fe3f6d50bb9348c0d1cbbf2c3" exitCode=0
Jan 28 11:34:11 crc kubenswrapper[4804]: I0128 11:34:11.477113 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"b060a5c852789a2a12f3919e3783e22e4e12a30fe3f6d50bb9348c0d1cbbf2c3"}
Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.484869 4804 generic.go:334] "Generic (PLEG): container finished" podID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerID="9b1c121b4786bda80a07c19c69fdefec554a2bf786e53da947c28d643e02ab69" exitCode=0
Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.484986 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"9b1c121b4786bda80a07c19c69fdefec554a2bf786e53da947c28d643e02ab69"}
Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.582655 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:34:12 crc kubenswrapper[4804]: I0128 11:34:12.582877 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.756665 4804 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.876953 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") pod \"1622f571-d0d6-4247-b47e-4dda08dea3b3\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.877665 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") pod \"1622f571-d0d6-4247-b47e-4dda08dea3b3\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.877819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") pod \"1622f571-d0d6-4247-b47e-4dda08dea3b3\" (UID: \"1622f571-d0d6-4247-b47e-4dda08dea3b3\") " Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.878500 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle" (OuterVolumeSpecName: "bundle") pod "1622f571-d0d6-4247-b47e-4dda08dea3b3" (UID: "1622f571-d0d6-4247-b47e-4dda08dea3b3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.886048 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt" (OuterVolumeSpecName: "kube-api-access-42kbt") pod "1622f571-d0d6-4247-b47e-4dda08dea3b3" (UID: "1622f571-d0d6-4247-b47e-4dda08dea3b3"). InnerVolumeSpecName "kube-api-access-42kbt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.979271 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-42kbt\" (UniqueName: \"kubernetes.io/projected/1622f571-d0d6-4247-b47e-4dda08dea3b3-kube-api-access-42kbt\") on node \"crc\" DevicePath \"\"" Jan 28 11:34:13 crc kubenswrapper[4804]: I0128 11:34:13.979328 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.074369 4804 scope.go:117] "RemoveContainer" containerID="888abd8066feec1a58a78cfc0c77f1634db2fc87ed5237703a224ace3d78ee8d" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.188952 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util" (OuterVolumeSpecName: "util") pod "1622f571-d0d6-4247-b47e-4dda08dea3b3" (UID: "1622f571-d0d6-4247-b47e-4dda08dea3b3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.283736 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1622f571-d0d6-4247-b47e-4dda08dea3b3-util\") on node \"crc\" DevicePath \"\"" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.502580 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-lqqmt_735b7edc-6f8b-4f5f-a9ca-11964dd78266/kube-multus/2.log" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.505161 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" event={"ID":"1622f571-d0d6-4247-b47e-4dda08dea3b3","Type":"ContainerDied","Data":"41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106"} Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.505205 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41da7abec0cee37dd6970a68064893f12dd62c68cd032c340f7495325cdb1106" Jan 28 11:34:14 crc kubenswrapper[4804]: I0128 11:34:14.505265 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.212589 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hzhkh"] Jan 28 11:34:16 crc kubenswrapper[4804]: E0128 11:34:16.213035 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="pull" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213047 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="pull" Jan 28 11:34:16 crc kubenswrapper[4804]: E0128 11:34:16.213063 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="util" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213070 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="util" Jan 28 11:34:16 crc kubenswrapper[4804]: E0128 11:34:16.213083 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="extract" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213088 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="extract" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213203 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1622f571-d0d6-4247-b47e-4dda08dea3b3" containerName="extract" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.213533 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.215980 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.218366 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-rtnpc" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.219423 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.223457 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hzhkh"] Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.309389 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bb4kr\" (UniqueName: \"kubernetes.io/projected/d478ae3c-a9f5-4f6e-ae30-1bd80027de73-kube-api-access-bb4kr\") pod \"nmstate-operator-646758c888-hzhkh\" (UID: \"d478ae3c-a9f5-4f6e-ae30-1bd80027de73\") " pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.410946 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bb4kr\" (UniqueName: \"kubernetes.io/projected/d478ae3c-a9f5-4f6e-ae30-1bd80027de73-kube-api-access-bb4kr\") pod \"nmstate-operator-646758c888-hzhkh\" (UID: \"d478ae3c-a9f5-4f6e-ae30-1bd80027de73\") " pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.432863 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bb4kr\" (UniqueName: \"kubernetes.io/projected/d478ae3c-a9f5-4f6e-ae30-1bd80027de73-kube-api-access-bb4kr\") pod \"nmstate-operator-646758c888-hzhkh\" (UID: 
\"d478ae3c-a9f5-4f6e-ae30-1bd80027de73\") " pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.530423 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" Jan 28 11:34:16 crc kubenswrapper[4804]: I0128 11:34:16.760516 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-646758c888-hzhkh"] Jan 28 11:34:17 crc kubenswrapper[4804]: I0128 11:34:17.521255 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" event={"ID":"d478ae3c-a9f5-4f6e-ae30-1bd80027de73","Type":"ContainerStarted","Data":"79d10a5e966e971eb72dbe65902340a083978afb3741f0bf00cfd4f0a6668320"} Jan 28 11:34:19 crc kubenswrapper[4804]: I0128 11:34:19.531545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" event={"ID":"d478ae3c-a9f5-4f6e-ae30-1bd80027de73","Type":"ContainerStarted","Data":"ecfbb35b9160d7adb86b3e4795f3b4b95e278e5bd877832ca2eaa5e102f0211c"} Jan 28 11:34:19 crc kubenswrapper[4804]: I0128 11:34:19.550191 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-646758c888-hzhkh" podStartSLOduration=1.3630722259999999 podStartE2EDuration="3.550173086s" podCreationTimestamp="2026-01-28 11:34:16 +0000 UTC" firstStartedPulling="2026-01-28 11:34:16.774066391 +0000 UTC m=+732.568946375" lastFinishedPulling="2026-01-28 11:34:18.961167261 +0000 UTC m=+734.756047235" observedRunningTime="2026-01-28 11:34:19.546065081 +0000 UTC m=+735.340945065" watchObservedRunningTime="2026-01-28 11:34:19.550173086 +0000 UTC m=+735.345053070" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.193079 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2pq8"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 
11:34:27.194601 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.197405 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-rs8h2" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.205654 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2pq8"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.231744 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.232914 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.234823 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.236980 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-r6vm7"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.254238 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spnc8\" (UniqueName: \"kubernetes.io/projected/b63500d6-29e0-4eef-82cd-fdc0036ef0f2-kube-api-access-spnc8\") pod \"nmstate-metrics-54757c584b-b2pq8\" (UID: \"b63500d6-29e0-4eef-82cd-fdc0036ef0f2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.254339 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl2w5\" (UniqueName: \"kubernetes.io/projected/c17b2105-0264-4cf3-8204-e68ba577728e-kube-api-access-pl2w5\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: 
\"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.254375 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.262007 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.262118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.313219 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.313870 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.316248 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-jbssz" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.316480 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.316524 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.322833 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355692 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-spnc8\" (UniqueName: \"kubernetes.io/projected/b63500d6-29e0-4eef-82cd-fdc0036ef0f2-kube-api-access-spnc8\") pod \"nmstate-metrics-54757c584b-b2pq8\" (UID: \"b63500d6-29e0-4eef-82cd-fdc0036ef0f2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355744 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355782 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bkh84\" (UniqueName: \"kubernetes.io/projected/a741d157-784a-4e3e-9e35-200d91f3aa47-kube-api-access-bkh84\") pod \"nmstate-handler-r6vm7\" (UID: 
\"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355810 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-ovs-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355855 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-nmstate-lock\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.355969 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zh7d\" (UniqueName: \"kubernetes.io/projected/77313f93-489e-4da6-81bb-eec0c795e242-kube-api-access-4zh7d\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356013 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77313f93-489e-4da6-81bb-eec0c795e242-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356059 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: 
\"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-dbus-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356090 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pl2w5\" (UniqueName: \"kubernetes.io/projected/c17b2105-0264-4cf3-8204-e68ba577728e-kube-api-access-pl2w5\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.356139 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.356246 4804 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.356290 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair podName:c17b2105-0264-4cf3-8204-e68ba577728e nodeName:}" failed. No retries permitted until 2026-01-28 11:34:27.856273704 +0000 UTC m=+743.651153688 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair") pod "nmstate-webhook-8474b5b9d8-c5t8z" (UID: "c17b2105-0264-4cf3-8204-e68ba577728e") : secret "openshift-nmstate-webhook" not found Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.374637 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pl2w5\" (UniqueName: \"kubernetes.io/projected/c17b2105-0264-4cf3-8204-e68ba577728e-kube-api-access-pl2w5\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.379323 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-spnc8\" (UniqueName: \"kubernetes.io/projected/b63500d6-29e0-4eef-82cd-fdc0036ef0f2-kube-api-access-spnc8\") pod \"nmstate-metrics-54757c584b-b2pq8\" (UID: \"b63500d6-29e0-4eef-82cd-fdc0036ef0f2\") " pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-ovs-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457075 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-nmstate-lock\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4zh7d\" (UniqueName: \"kubernetes.io/projected/77313f93-489e-4da6-81bb-eec0c795e242-kube-api-access-4zh7d\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457126 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77313f93-489e-4da6-81bb-eec0c795e242-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457158 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-dbus-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457155 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-ovs-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457180 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-nmstate-lock\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457224 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457549 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bkh84\" (UniqueName: \"kubernetes.io/projected/a741d157-784a-4e3e-9e35-200d91f3aa47-kube-api-access-bkh84\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.457311 4804 secret.go:188] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.457604 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/a741d157-784a-4e3e-9e35-200d91f3aa47-dbus-socket\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: E0128 11:34:27.457631 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert podName:77313f93-489e-4da6-81bb-eec0c795e242 nodeName:}" failed. No retries permitted until 2026-01-28 11:34:27.957612037 +0000 UTC m=+743.752492021 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert") pod "nmstate-console-plugin-7754f76f8b-bbn52" (UID: "77313f93-489e-4da6-81bb-eec0c795e242") : secret "plugin-serving-cert" not found Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.458068 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/77313f93-489e-4da6-81bb-eec0c795e242-nginx-conf\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.476527 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zh7d\" (UniqueName: \"kubernetes.io/projected/77313f93-489e-4da6-81bb-eec0c795e242-kube-api-access-4zh7d\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.477951 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bkh84\" (UniqueName: \"kubernetes.io/projected/a741d157-784a-4e3e-9e35-200d91f3aa47-kube-api-access-bkh84\") pod \"nmstate-handler-r6vm7\" (UID: \"a741d157-784a-4e3e-9e35-200d91f3aa47\") " pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.511482 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.525343 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5d7d54b946-gb592"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.526170 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.539030 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d54b946-gb592"] Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.558882 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-service-ca\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.558958 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zvng\" (UniqueName: \"kubernetes.io/projected/411c17ba-96e6-4688-965a-16f19ebbdcec-kube-api-access-9zvng\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.558986 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-oauth-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-oauth-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559034 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-console-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559067 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.559096 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-trusted-ca-bundle\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.588622 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:27 crc kubenswrapper[4804]: W0128 11:34:27.611136 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda741d157_784a_4e3e_9e35_200d91f3aa47.slice/crio-47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d WatchSource:0}: Error finding container 47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d: Status 404 returned error can't find the container with id 47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660785 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-service-ca\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zvng\" (UniqueName: \"kubernetes.io/projected/411c17ba-96e6-4688-965a-16f19ebbdcec-kube-api-access-9zvng\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660876 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-oauth-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660924 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-oauth-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.660944 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-console-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.661974 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-service-ca\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.662870 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.662977 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-trusted-ca-bundle\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.663842 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: 
\"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-oauth-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.664418 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-console-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.665009 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/411c17ba-96e6-4688-965a-16f19ebbdcec-trusted-ca-bundle\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.667564 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-serving-cert\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.668126 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/411c17ba-96e6-4688-965a-16f19ebbdcec-console-oauth-config\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.680390 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zvng\" (UniqueName: 
\"kubernetes.io/projected/411c17ba-96e6-4688-965a-16f19ebbdcec-kube-api-access-9zvng\") pod \"console-5d7d54b946-gb592\" (UID: \"411c17ba-96e6-4688-965a-16f19ebbdcec\") " pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.725505 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-54757c584b-b2pq8"] Jan 28 11:34:27 crc kubenswrapper[4804]: W0128 11:34:27.730353 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb63500d6_29e0_4eef_82cd_fdc0036ef0f2.slice/crio-fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07 WatchSource:0}: Error finding container fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07: Status 404 returned error can't find the container with id fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07 Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.866067 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.869622 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.869996 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/c17b2105-0264-4cf3-8204-e68ba577728e-tls-key-pair\") pod \"nmstate-webhook-8474b5b9d8-c5t8z\" (UID: \"c17b2105-0264-4cf3-8204-e68ba577728e\") " pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.877637 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.967080 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:27 crc kubenswrapper[4804]: I0128 11:34:27.972503 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/77313f93-489e-4da6-81bb-eec0c795e242-plugin-serving-cert\") pod \"nmstate-console-plugin-7754f76f8b-bbn52\" (UID: \"77313f93-489e-4da6-81bb-eec0c795e242\") " pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.151953 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5d7d54b946-gb592"] Jan 28 11:34:28 crc kubenswrapper[4804]: W0128 11:34:28.157035 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod411c17ba_96e6_4688_965a_16f19ebbdcec.slice/crio-5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c WatchSource:0}: 
Error finding container 5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c: Status 404 returned error can't find the container with id 5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.184266 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z"] Jan 28 11:34:28 crc kubenswrapper[4804]: W0128 11:34:28.185020 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc17b2105_0264_4cf3_8204_e68ba577728e.slice/crio-6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234 WatchSource:0}: Error finding container 6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234: Status 404 returned error can't find the container with id 6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234 Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.233064 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.442455 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52"] Jan 28 11:34:28 crc kubenswrapper[4804]: W0128 11:34:28.453917 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77313f93_489e_4da6_81bb_eec0c795e242.slice/crio-bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb WatchSource:0}: Error finding container bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb: Status 404 returned error can't find the container with id bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.581744 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" event={"ID":"c17b2105-0264-4cf3-8204-e68ba577728e","Type":"ContainerStarted","Data":"6b5826c96f96c14bf2ca0eaf7ad140f9d44e767f7e41d2dccc3673aa873bd234"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.583913 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-gb592" event={"ID":"411c17ba-96e6-4688-965a-16f19ebbdcec","Type":"ContainerStarted","Data":"95e5f88d129c9c06bb64df2eddf147aa9fc8153f0d9b27cabd95df7057c34ab2"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.583937 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5d7d54b946-gb592" event={"ID":"411c17ba-96e6-4688-965a-16f19ebbdcec","Type":"ContainerStarted","Data":"5e7b7b8315a7938d527f0ed312364ea2e85da21a43357999f90049cf34bcef8c"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.585072 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" 
event={"ID":"b63500d6-29e0-4eef-82cd-fdc0036ef0f2","Type":"ContainerStarted","Data":"fd52dd1687baaf7cc0b4821de0e58d189c15bcdd192a66cc6cd8e5940504de07"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.586277 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-r6vm7" event={"ID":"a741d157-784a-4e3e-9e35-200d91f3aa47","Type":"ContainerStarted","Data":"47857daa971eb238234c53495fc121c704b5def049f7084d7fdd38014222043d"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.587703 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" event={"ID":"77313f93-489e-4da6-81bb-eec0c795e242","Type":"ContainerStarted","Data":"bbcdef9fe789e1e7c4579534e743556e2327187ef6b0a81776ffb7640753d5cb"} Jan 28 11:34:28 crc kubenswrapper[4804]: I0128 11:34:28.606604 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5d7d54b946-gb592" podStartSLOduration=1.606588575 podStartE2EDuration="1.606588575s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:34:28.603315188 +0000 UTC m=+744.398195172" watchObservedRunningTime="2026-01-28 11:34:28.606588575 +0000 UTC m=+744.401468559" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.612294 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" event={"ID":"c17b2105-0264-4cf3-8204-e68ba577728e","Type":"ContainerStarted","Data":"6d66be5e6e8c32470ffd605087adb97daf341cc67035fbb7d6b8127daf872569"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.612912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.615066 4804 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-nmstate/nmstate-handler-r6vm7" event={"ID":"a741d157-784a-4e3e-9e35-200d91f3aa47","Type":"ContainerStarted","Data":"5f96900f275a52cd5a01b9936039699dacb96bff3d36eb1ad44430e8232ed64c"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.615257 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.616686 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" event={"ID":"b63500d6-29e0-4eef-82cd-fdc0036ef0f2","Type":"ContainerStarted","Data":"5b18fe7fcd742562ae9b731d6534622502ea8a4cec123fb757da8420c3864356"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.618055 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" event={"ID":"77313f93-489e-4da6-81bb-eec0c795e242","Type":"ContainerStarted","Data":"7665f99a4d0917bbdb5dfb7c4fc2e0de9eba89fd20a8f0355991f8766f1d9ffa"} Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.633106 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" podStartSLOduration=1.714348889 podStartE2EDuration="4.633085641s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="2026-01-28 11:34:28.186944144 +0000 UTC m=+743.981824128" lastFinishedPulling="2026-01-28 11:34:31.105680896 +0000 UTC m=+746.900560880" observedRunningTime="2026-01-28 11:34:31.628774719 +0000 UTC m=+747.423654703" watchObservedRunningTime="2026-01-28 11:34:31.633085641 +0000 UTC m=+747.427965625" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.674651 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-r6vm7" podStartSLOduration=1.152941929 podStartE2EDuration="4.674632833s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" 
firstStartedPulling="2026-01-28 11:34:27.612968522 +0000 UTC m=+743.407848506" lastFinishedPulling="2026-01-28 11:34:31.134659426 +0000 UTC m=+746.929539410" observedRunningTime="2026-01-28 11:34:31.671349445 +0000 UTC m=+747.466229429" watchObservedRunningTime="2026-01-28 11:34:31.674632833 +0000 UTC m=+747.469512817" Jan 28 11:34:31 crc kubenswrapper[4804]: I0128 11:34:31.676682 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-7754f76f8b-bbn52" podStartSLOduration=2.029802883 podStartE2EDuration="4.67667506s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="2026-01-28 11:34:28.456490463 +0000 UTC m=+744.251370447" lastFinishedPulling="2026-01-28 11:34:31.10336265 +0000 UTC m=+746.898242624" observedRunningTime="2026-01-28 11:34:31.653257172 +0000 UTC m=+747.448137146" watchObservedRunningTime="2026-01-28 11:34:31.67667506 +0000 UTC m=+747.471555044" Jan 28 11:34:33 crc kubenswrapper[4804]: I0128 11:34:33.632009 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" event={"ID":"b63500d6-29e0-4eef-82cd-fdc0036ef0f2","Type":"ContainerStarted","Data":"a77c121d900e7c4e29c8557c6d5b43c4d7c972f50bf1ea53116010408022eb64"} Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.616848 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-r6vm7" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.648710 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-54757c584b-b2pq8" podStartSLOduration=5.175828389 podStartE2EDuration="10.648673506s" podCreationTimestamp="2026-01-28 11:34:27 +0000 UTC" firstStartedPulling="2026-01-28 11:34:27.732885784 +0000 UTC m=+743.527765768" lastFinishedPulling="2026-01-28 11:34:33.205730911 +0000 UTC m=+749.000610885" observedRunningTime="2026-01-28 11:34:33.648684206 +0000 
UTC m=+749.443564200" watchObservedRunningTime="2026-01-28 11:34:37.648673506 +0000 UTC m=+753.443553540" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.870771 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.870877 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:37 crc kubenswrapper[4804]: I0128 11:34:37.875418 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:38 crc kubenswrapper[4804]: I0128 11:34:38.671617 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5d7d54b946-gb592" Jan 28 11:34:38 crc kubenswrapper[4804]: I0128 11:34:38.745155 4804 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 28 11:34:38 crc kubenswrapper[4804]: I0128 11:34:38.759397 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.582169 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.582518 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:34:42 crc kubenswrapper[4804]: 
I0128 11:34:42.582587 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.583077 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:34:42 crc kubenswrapper[4804]: I0128 11:34:42.583133 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b" gracePeriod=600 Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.699905 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b" exitCode=0 Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.699940 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b"} Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.700472 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8"} Jan 28 11:34:43 crc kubenswrapper[4804]: I0128 11:34:43.700493 4804 
scope.go:117] "RemoveContainer" containerID="c7b5b9b6b8ef791eab510f91481d8192b718ad6748767af1fa3c3c5a88adba6c" Jan 28 11:34:47 crc kubenswrapper[4804]: I0128 11:34:47.889196 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-8474b5b9d8-c5t8z" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.789423 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh"] Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.791854 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.795406 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.809344 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh"] Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.977692 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.977743 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: 
\"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:00 crc kubenswrapper[4804]: I0128 11:35:00.977791 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.079617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.079764 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.079889 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" 
Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.080408 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.080455 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.107138 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.111540 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.317208 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh"] Jan 28 11:35:01 crc kubenswrapper[4804]: W0128 11:35:01.326474 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1c8da098_aace_4ed5_8846_6fff6aee19be.slice/crio-de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b WatchSource:0}: Error finding container de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b: Status 404 returned error can't find the container with id de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.825666 4804 generic.go:334] "Generic (PLEG): container finished" podID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerID="16087a95badf2d73ede6e46431321e6810ab78fafe1112764ae45ce7d1f66d24" exitCode=0 Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.825765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"16087a95badf2d73ede6e46431321e6810ab78fafe1112764ae45ce7d1f66d24"} Jan 28 11:35:01 crc kubenswrapper[4804]: I0128 11:35:01.826148 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerStarted","Data":"de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b"} Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.120214 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.121778 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.134058 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.209604 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.209756 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.209782 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.310515 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " 
pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.310573 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.310606 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.311128 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.311167 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.328711 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"redhat-operators-kjhw2\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " pod="openshift-marketplace/redhat-operators-kjhw2" Jan 
28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.443118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.652427 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:03 crc kubenswrapper[4804]: W0128 11:35:03.665524 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod58d15df8_f5ee_4982_87f1_af5e3ec371ba.slice/crio-8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8 WatchSource:0}: Error finding container 8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8: Status 404 returned error can't find the container with id 8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.810393 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xghdb" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" containerID="cri-o://be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" gracePeriod=15 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.838824 4804 generic.go:334] "Generic (PLEG): container finished" podID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerID="ef47a0f93824c160c7b1829633f77e89678a9d3040c426b0d6233119c875e72f" exitCode=0 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.838896 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"ef47a0f93824c160c7b1829633f77e89678a9d3040c426b0d6233119c875e72f"} Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.839000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" 
event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerStarted","Data":"8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8"} Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.841873 4804 generic.go:334] "Generic (PLEG): container finished" podID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerID="06684d1398fab6fb63e2ee12ea3e1967dbea452d5ada621bb30b4e1ff8f87295" exitCode=0 Jan 28 11:35:03 crc kubenswrapper[4804]: I0128 11:35:03.841925 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"06684d1398fab6fb63e2ee12ea3e1967dbea452d5ada621bb30b4e1ff8f87295"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.116736 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xghdb_bf13c867-7c3e-4845-a6c8-f25700c31666/console/0.log" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.116805 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221010 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221321 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221350 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221393 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221453 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221484 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.221511 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") pod \"bf13c867-7c3e-4845-a6c8-f25700c31666\" (UID: \"bf13c867-7c3e-4845-a6c8-f25700c31666\") " Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.222816 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config" (OuterVolumeSpecName: "console-config") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.223090 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca" (OuterVolumeSpecName: "service-ca") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.223230 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.223360 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.227541 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h" (OuterVolumeSpecName: "kube-api-access-dtt9h") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "kube-api-access-dtt9h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.228389 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.228912 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "bf13c867-7c3e-4845-a6c8-f25700c31666" (UID: "bf13c867-7c3e-4845-a6c8-f25700c31666"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.327944 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dtt9h\" (UniqueName: \"kubernetes.io/projected/bf13c867-7c3e-4845-a6c8-f25700c31666-kube-api-access-dtt9h\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.327977 4804 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.327995 4804 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-service-ca\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328006 4804 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-console-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328019 4804 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328030 4804 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bf13c867-7c3e-4845-a6c8-f25700c31666-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.328040 4804 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf13c867-7c3e-4845-a6c8-f25700c31666-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:04 crc 
kubenswrapper[4804]: I0128 11:35:04.867941 4804 generic.go:334] "Generic (PLEG): container finished" podID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerID="a3c8691c51f616b7604b0930041e17ae2ec23f7cf62d3089ecd56dc16dca0b5b" exitCode=0 Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.868022 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"a3c8691c51f616b7604b0930041e17ae2ec23f7cf62d3089ecd56dc16dca0b5b"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871686 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xghdb_bf13c867-7c3e-4845-a6c8-f25700c31666/console/0.log" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871729 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" exitCode=2 Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerDied","Data":"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871794 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xghdb" event={"ID":"bf13c867-7c3e-4845-a6c8-f25700c31666","Type":"ContainerDied","Data":"39c6be7d2c6b604e29ab674e70547e5294e550d001aed4bfc7286a6d8fd167c8"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.871813 4804 scope.go:117] "RemoveContainer" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.872018 4804 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-console/console-f9d7485db-xghdb" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.877687 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerStarted","Data":"b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731"} Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.905823 4804 scope.go:117] "RemoveContainer" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" Jan 28 11:35:04 crc kubenswrapper[4804]: E0128 11:35:04.907222 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f\": container with ID starting with be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f not found: ID does not exist" containerID="be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.907603 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f"} err="failed to get container status \"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f\": rpc error: code = NotFound desc = could not find container \"be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f\": container with ID starting with be99b0f81bacd1a775090bef502aed139d0719ef93892f3a67c9fb54f17d296f not found: ID does not exist" Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.964368 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:35:04 crc kubenswrapper[4804]: I0128 11:35:04.969323 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xghdb"] Jan 28 11:35:05 crc 
kubenswrapper[4804]: I0128 11:35:05.889740 4804 generic.go:334] "Generic (PLEG): container finished" podID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerID="b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731" exitCode=0 Jan 28 11:35:05 crc kubenswrapper[4804]: I0128 11:35:05.889785 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731"} Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.144558 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.251305 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") pod \"1c8da098-aace-4ed5-8846-6fff6aee19be\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.251419 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") pod \"1c8da098-aace-4ed5-8846-6fff6aee19be\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.251459 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") pod \"1c8da098-aace-4ed5-8846-6fff6aee19be\" (UID: \"1c8da098-aace-4ed5-8846-6fff6aee19be\") " Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.253136 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle" (OuterVolumeSpecName: "bundle") pod "1c8da098-aace-4ed5-8846-6fff6aee19be" (UID: "1c8da098-aace-4ed5-8846-6fff6aee19be"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.259140 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc" (OuterVolumeSpecName: "kube-api-access-w8nsc") pod "1c8da098-aace-4ed5-8846-6fff6aee19be" (UID: "1c8da098-aace-4ed5-8846-6fff6aee19be"). InnerVolumeSpecName "kube-api-access-w8nsc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.273108 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util" (OuterVolumeSpecName: "util") pod "1c8da098-aace-4ed5-8846-6fff6aee19be" (UID: "1c8da098-aace-4ed5-8846-6fff6aee19be"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.352780 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8nsc\" (UniqueName: \"kubernetes.io/projected/1c8da098-aace-4ed5-8846-6fff6aee19be-kube-api-access-w8nsc\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.352833 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-util\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.352848 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/1c8da098-aace-4ed5-8846-6fff6aee19be-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.897834 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" event={"ID":"1c8da098-aace-4ed5-8846-6fff6aee19be","Type":"ContainerDied","Data":"de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b"} Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.897877 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de0169d1b66bb0623ae11e6f50c444d3b41e2a49908a785e593d432763ef543b" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.897920 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh" Jan 28 11:35:06 crc kubenswrapper[4804]: I0128 11:35:06.923862 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" path="/var/lib/kubelet/pods/bf13c867-7c3e-4845-a6c8-f25700c31666/volumes" Jan 28 11:35:07 crc kubenswrapper[4804]: I0128 11:35:07.905466 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerStarted","Data":"a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490"} Jan 28 11:35:07 crc kubenswrapper[4804]: I0128 11:35:07.929528 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-kjhw2" podStartSLOduration=1.761607767 podStartE2EDuration="4.92951005s" podCreationTimestamp="2026-01-28 11:35:03 +0000 UTC" firstStartedPulling="2026-01-28 11:35:03.840029856 +0000 UTC m=+779.634909840" lastFinishedPulling="2026-01-28 11:35:07.007932139 +0000 UTC m=+782.802812123" observedRunningTime="2026-01-28 11:35:07.921440225 +0000 UTC m=+783.716320239" watchObservedRunningTime="2026-01-28 11:35:07.92951005 +0000 UTC m=+783.724390044" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.444224 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.444330 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.500352 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:13 crc kubenswrapper[4804]: I0128 11:35:13.974582 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:14 crc kubenswrapper[4804]: E0128 11:35:14.555459 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:14 crc kubenswrapper[4804]: I0128 11:35:14.911914 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:15 crc kubenswrapper[4804]: I0128 11:35:15.944468 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-kjhw2" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" containerID="cri-o://a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490" gracePeriod=2 Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.086925 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr"] Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087159 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="pull" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087171 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="pull" Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087184 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="extract" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087191 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="extract" Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087205 4804 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087211 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" Jan 28 11:35:16 crc kubenswrapper[4804]: E0128 11:35:16.087222 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="util" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087228 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="util" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087317 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c8da098-aace-4ed5-8846-6fff6aee19be" containerName="extract" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087326 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf13c867-7c3e-4845-a6c8-f25700c31666" containerName="console" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.087877 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.090851 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.091164 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.091305 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-csf86" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.091432 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.105800 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.161068 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr"] Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.291841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-webhook-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.291954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tp5hl\" (UniqueName: \"kubernetes.io/projected/a0eda12d-b723-4a3a-8f2b-916de07b279c-kube-api-access-tp5hl\") pod 
\"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.291993 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-apiservice-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.337359 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427"] Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.338312 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.345354 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.345565 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-4pbr9" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.345636 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.349807 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427"] Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tp5hl\" (UniqueName: 
\"kubernetes.io/projected/a0eda12d-b723-4a3a-8f2b-916de07b279c-kube-api-access-tp5hl\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393608 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-apiservice-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393641 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393676 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-webhook-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393723 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlpw8\" (UniqueName: \"kubernetes.io/projected/13606290-8fc4-4792-a328-207ee9a1994e-kube-api-access-zlpw8\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " 
pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.393764 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-webhook-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.399770 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-apiservice-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.399757 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a0eda12d-b723-4a3a-8f2b-916de07b279c-webhook-cert\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.410627 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tp5hl\" (UniqueName: \"kubernetes.io/projected/a0eda12d-b723-4a3a-8f2b-916de07b279c-kube-api-access-tp5hl\") pod \"metallb-operator-controller-manager-6b85b59588-rf4wr\" (UID: \"a0eda12d-b723-4a3a-8f2b-916de07b279c\") " pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.494680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" 
(UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.495140 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlpw8\" (UniqueName: \"kubernetes.io/projected/13606290-8fc4-4792-a328-207ee9a1994e-kube-api-access-zlpw8\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.495189 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-webhook-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.497633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-apiservice-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.498364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/13606290-8fc4-4792-a328-207ee9a1994e-webhook-cert\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 
11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.512477 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zlpw8\" (UniqueName: \"kubernetes.io/projected/13606290-8fc4-4792-a328-207ee9a1994e-kube-api-access-zlpw8\") pod \"metallb-operator-webhook-server-6b844cd4fc-mn427\" (UID: \"13606290-8fc4-4792-a328-207ee9a1994e\") " pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.675120 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.705998 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.965461 4804 generic.go:334] "Generic (PLEG): container finished" podID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerID="a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490" exitCode=0 Jan 28 11:35:16 crc kubenswrapper[4804]: I0128 11:35:16.965507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.102574 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.168565 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427"] Jan 28 11:35:17 crc kubenswrapper[4804]: W0128 11:35:17.178704 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod13606290_8fc4_4792_a328_207ee9a1994e.slice/crio-38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63 WatchSource:0}: Error finding container 38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63: Status 404 returned error can't find the container with id 38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63 Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.206765 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") pod \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.206876 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") pod \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.206978 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") pod \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\" (UID: \"58d15df8-f5ee-4982-87f1-af5e3ec371ba\") " Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.208153 4804 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities" (OuterVolumeSpecName: "utilities") pod "58d15df8-f5ee-4982-87f1-af5e3ec371ba" (UID: "58d15df8-f5ee-4982-87f1-af5e3ec371ba"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.211316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt" (OuterVolumeSpecName: "kube-api-access-p27zt") pod "58d15df8-f5ee-4982-87f1-af5e3ec371ba" (UID: "58d15df8-f5ee-4982-87f1-af5e3ec371ba"). InnerVolumeSpecName "kube-api-access-p27zt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.295849 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr"] Jan 28 11:35:17 crc kubenswrapper[4804]: W0128 11:35:17.300514 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda0eda12d_b723_4a3a_8f2b_916de07b279c.slice/crio-7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8 WatchSource:0}: Error finding container 7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8: Status 404 returned error can't find the container with id 7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8 Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.307859 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p27zt\" (UniqueName: \"kubernetes.io/projected/58d15df8-f5ee-4982-87f1-af5e3ec371ba-kube-api-access-p27zt\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.307893 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.330430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "58d15df8-f5ee-4982-87f1-af5e3ec371ba" (UID: "58d15df8-f5ee-4982-87f1-af5e3ec371ba"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.409240 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/58d15df8-f5ee-4982-87f1-af5e3ec371ba-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.972861 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-kjhw2" event={"ID":"58d15df8-f5ee-4982-87f1-af5e3ec371ba","Type":"ContainerDied","Data":"8a966fc586e419cf67f7b38fc6541cd18a4eae4fc2047fa91dcc484c80d020a8"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.972929 4804 scope.go:117] "RemoveContainer" containerID="a3ce9283976204b122c4c548f11f6aac4a29ddef5a3bfd09f9aeda1484fdf490" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.973041 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-kjhw2" Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.985928 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" event={"ID":"a0eda12d-b723-4a3a-8f2b-916de07b279c","Type":"ContainerStarted","Data":"7ed7e10327b6e2b559275ccb42826da113d17b2ccbf73ec61a413b0d01769da8"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.989553 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" event={"ID":"13606290-8fc4-4792-a328-207ee9a1994e","Type":"ContainerStarted","Data":"38d940d65cf11503d2d123068534aabfbaba360d993dd470f8f1177c21136a63"} Jan 28 11:35:17 crc kubenswrapper[4804]: I0128 11:35:17.998228 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.004672 4804 scope.go:117] "RemoveContainer" containerID="b7a3e5ad643505de7205638ab542b48003d62d08ea5a488d01fba8a7a1e4e731" Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.007040 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-kjhw2"] Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.024036 4804 scope.go:117] "RemoveContainer" containerID="ef47a0f93824c160c7b1829633f77e89678a9d3040c426b0d6233119c875e72f" Jan 28 11:35:18 crc kubenswrapper[4804]: I0128 11:35:18.924089 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" path="/var/lib/kubelet/pods/58d15df8-f5ee-4982-87f1-af5e3ec371ba/volumes" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.020718 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" 
event={"ID":"13606290-8fc4-4792-a328-207ee9a1994e","Type":"ContainerStarted","Data":"9972640d9b749e4a4f568799e85cef0cb711c4450d8a295c95f987cc8e1e6c6f"} Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.021155 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.022418 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" event={"ID":"a0eda12d-b723-4a3a-8f2b-916de07b279c","Type":"ContainerStarted","Data":"13c9557b28ef9973ccd811e27b5368c68d7053d774b9caec17246f58f322b60b"} Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.022564 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.038748 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" podStartSLOduration=2.132870489 podStartE2EDuration="7.038730643s" podCreationTimestamp="2026-01-28 11:35:16 +0000 UTC" firstStartedPulling="2026-01-28 11:35:17.18083106 +0000 UTC m=+792.975711044" lastFinishedPulling="2026-01-28 11:35:22.086691214 +0000 UTC m=+797.881571198" observedRunningTime="2026-01-28 11:35:23.037636327 +0000 UTC m=+798.832516311" watchObservedRunningTime="2026-01-28 11:35:23.038730643 +0000 UTC m=+798.833610627" Jan 28 11:35:23 crc kubenswrapper[4804]: I0128 11:35:23.055797 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" podStartSLOduration=2.287949905 podStartE2EDuration="7.055775682s" podCreationTimestamp="2026-01-28 11:35:16 +0000 UTC" firstStartedPulling="2026-01-28 11:35:17.303478062 +0000 UTC m=+793.098358036" lastFinishedPulling="2026-01-28 
11:35:22.071303829 +0000 UTC m=+797.866183813" observedRunningTime="2026-01-28 11:35:23.054396597 +0000 UTC m=+798.849276581" watchObservedRunningTime="2026-01-28 11:35:23.055775682 +0000 UTC m=+798.850655666" Jan 28 11:35:24 crc kubenswrapper[4804]: E0128 11:35:24.679449 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:34 crc kubenswrapper[4804]: E0128 11:35:34.831850 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:36 crc kubenswrapper[4804]: I0128 11:35:36.679984 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6b844cd4fc-mn427" Jan 28 11:35:44 crc kubenswrapper[4804]: E0128 11:35:44.961477 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:55 crc kubenswrapper[4804]: E0128 11:35:55.085923 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf13c867_7c3e_4845_a6c8_f25700c31666.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:35:56 crc kubenswrapper[4804]: I0128 11:35:56.709175 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b85b59588-rf4wr" Jan 28 11:35:57 crc 
kubenswrapper[4804]: I0128 11:35:57.426384 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6"] Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.426600 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426613 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.426629 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-content" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426635 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-content" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.426652 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-utilities" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426658 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="extract-utilities" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.426760 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="58d15df8-f5ee-4982-87f1-af5e3ec371ba" containerName="registry-server" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.427157 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.429532 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.429599 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-cwkld" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.440431 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-5kdlz"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.443350 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.444243 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.445632 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.445819 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.513212 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-kcvj8"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.514304 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.515909 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.518609 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.518612 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.519301 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-7kw6p" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.521869 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-rfhfx"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.523035 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.526330 4804 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.545005 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rfhfx"] Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9pgl\" (UniqueName: \"kubernetes.io/projected/1ae74e9e-799f-46bb-9a53-c8307c83203d-kube-api-access-k9pgl\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593410 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-cert\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593436 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593458 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-conf\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc 
kubenswrapper[4804]: I0128 11:35:57.593484 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-npbcx\" (UniqueName: \"kubernetes.io/projected/3ce00c89-f00d-43aa-9907-77bf331c3dbd-kube-api-access-npbcx\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593544 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltg8n\" (UniqueName: \"kubernetes.io/projected/2fa1df7e-03c8-4931-ad89-222acae36030-kube-api-access-ltg8n\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593598 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ce00c89-f00d-43aa-9907-77bf331c3dbd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-reloader\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593649 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-sockets\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 
11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593687 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-metrics-certs\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-metrics\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593739 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2fa1df7e-03c8-4931-ad89-222acae36030-metallb-excludel2\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593761 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/45631116-4b02-448f-9158-18eaae682d9d-frr-startup\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593781 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593802 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vdrs7\" (UniqueName: \"kubernetes.io/projected/45631116-4b02-448f-9158-18eaae682d9d-kube-api-access-vdrs7\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.593821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45631116-4b02-448f-9158-18eaae682d9d-metrics-certs\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695489 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9pgl\" (UniqueName: \"kubernetes.io/projected/1ae74e9e-799f-46bb-9a53-c8307c83203d-kube-api-access-k9pgl\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695559 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-cert\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695587 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695610 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" 
(UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-conf\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695639 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-npbcx\" (UniqueName: \"kubernetes.io/projected/3ce00c89-f00d-43aa-9907-77bf331c3dbd-kube-api-access-npbcx\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695670 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltg8n\" (UniqueName: \"kubernetes.io/projected/2fa1df7e-03c8-4931-ad89-222acae36030-kube-api-access-ltg8n\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695699 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ce00c89-f00d-43aa-9907-77bf331c3dbd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695723 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-reloader\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695745 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-sockets\") 
pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.695758 4804 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695777 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-metrics-certs\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695797 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-metrics\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.695831 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist podName:2fa1df7e-03c8-4931-ad89-222acae36030 nodeName:}" failed. No retries permitted until 2026-01-28 11:35:58.195810413 +0000 UTC m=+833.990690397 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist") pod "speaker-kcvj8" (UID: "2fa1df7e-03c8-4931-ad89-222acae36030") : secret "metallb-memberlist" not found Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695855 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2fa1df7e-03c8-4931-ad89-222acae36030-metallb-excludel2\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695912 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/45631116-4b02-448f-9158-18eaae682d9d-frr-startup\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695939 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695969 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vdrs7\" (UniqueName: \"kubernetes.io/projected/45631116-4b02-448f-9158-18eaae682d9d-kube-api-access-vdrs7\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.695985 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45631116-4b02-448f-9158-18eaae682d9d-metrics-certs\") pod 
\"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.696280 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-metrics\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.696503 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-reloader\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.696651 4804 secret.go:188] Couldn't get secret metallb-system/controller-certs-secret: secret "controller-certs-secret" not found Jan 28 11:35:57 crc kubenswrapper[4804]: E0128 11:35:57.697036 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs podName:1ae74e9e-799f-46bb-9a53-c8307c83203d nodeName:}" failed. No retries permitted until 2026-01-28 11:35:58.197009663 +0000 UTC m=+833.991889657 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs") pod "controller-6968d8fdc4-rfhfx" (UID: "1ae74e9e-799f-46bb-9a53-c8307c83203d") : secret "controller-certs-secret" not found Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.696756 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-conf\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.697303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/2fa1df7e-03c8-4931-ad89-222acae36030-metallb-excludel2\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.700678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/45631116-4b02-448f-9158-18eaae682d9d-frr-startup\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.700958 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/45631116-4b02-448f-9158-18eaae682d9d-frr-sockets\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.702380 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/3ce00c89-f00d-43aa-9907-77bf331c3dbd-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") 
" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.703326 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-cert\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.706406 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/45631116-4b02-448f-9158-18eaae682d9d-metrics-certs\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.712715 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-metrics-certs\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.728508 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9pgl\" (UniqueName: \"kubernetes.io/projected/1ae74e9e-799f-46bb-9a53-c8307c83203d-kube-api-access-k9pgl\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.731526 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vdrs7\" (UniqueName: \"kubernetes.io/projected/45631116-4b02-448f-9158-18eaae682d9d-kube-api-access-vdrs7\") pod \"frr-k8s-5kdlz\" (UID: \"45631116-4b02-448f-9158-18eaae682d9d\") " pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.750745 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-npbcx\" (UniqueName: \"kubernetes.io/projected/3ce00c89-f00d-43aa-9907-77bf331c3dbd-kube-api-access-npbcx\") pod \"frr-k8s-webhook-server-7df86c4f6c-cvlt6\" (UID: \"3ce00c89-f00d-43aa-9907-77bf331c3dbd\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.751520 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltg8n\" (UniqueName: \"kubernetes.io/projected/2fa1df7e-03c8-4931-ad89-222acae36030-kube-api-access-ltg8n\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:57 crc kubenswrapper[4804]: I0128 11:35:57.760586 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.043922 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.201251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.201645 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:58 crc kubenswrapper[4804]: E0128 11:35:58.202076 4804 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Jan 28 
11:35:58 crc kubenswrapper[4804]: E0128 11:35:58.202165 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist podName:2fa1df7e-03c8-4931-ad89-222acae36030 nodeName:}" failed. No retries permitted until 2026-01-28 11:35:59.202139517 +0000 UTC m=+834.997019531 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist") pod "speaker-kcvj8" (UID: "2fa1df7e-03c8-4931-ad89-222acae36030") : secret "metallb-memberlist" not found Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.208605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/1ae74e9e-799f-46bb-9a53-c8307c83203d-metrics-certs\") pod \"controller-6968d8fdc4-rfhfx\" (UID: \"1ae74e9e-799f-46bb-9a53-c8307c83203d\") " pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.244511 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"2d4b44323b47cdde41dc703804ae2564d57c7bcf91fd46ed7006b163b578f7cb"} Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.422999 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6"] Jan 28 11:35:58 crc kubenswrapper[4804]: W0128 11:35:58.434978 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3ce00c89_f00d_43aa_9907_77bf331c3dbd.slice/crio-add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1 WatchSource:0}: Error finding container add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1: Status 404 returned error can't find the container with id 
add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1 Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.449213 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:58 crc kubenswrapper[4804]: I0128 11:35:58.628423 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-rfhfx"] Jan 28 11:35:58 crc kubenswrapper[4804]: W0128 11:35:58.631762 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ae74e9e_799f_46bb_9a53_c8307c83203d.slice/crio-e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29 WatchSource:0}: Error finding container e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29: Status 404 returned error can't find the container with id e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29 Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.216729 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.224547 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/2fa1df7e-03c8-4931-ad89-222acae36030-memberlist\") pod \"speaker-kcvj8\" (UID: \"2fa1df7e-03c8-4931-ad89-222acae36030\") " pod="metallb-system/speaker-kcvj8" Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.251199 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" event={"ID":"3ce00c89-f00d-43aa-9907-77bf331c3dbd","Type":"ContainerStarted","Data":"add055449512cae9b144681e999f78305ad7ab0fb5793ee5678368c16a9f71d1"} Jan 28 
11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.253157 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rfhfx" event={"ID":"1ae74e9e-799f-46bb-9a53-c8307c83203d","Type":"ContainerStarted","Data":"6a98a7ce8ac14193a02df20d7d1bd2e536ce1f2aa1631767b910db1de13874b6"} Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.253205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rfhfx" event={"ID":"1ae74e9e-799f-46bb-9a53-c8307c83203d","Type":"ContainerStarted","Data":"c4163e95b8b9b2f31e1575121524d0fa4e97b835e27075aa84519e2e1ebd1e06"} Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.253216 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-rfhfx" event={"ID":"1ae74e9e-799f-46bb-9a53-c8307c83203d","Type":"ContainerStarted","Data":"e83e83b2fa055e9cfd0cc4e55ae4cceb07ac58c2c75c252ab1ad546e70dd1c29"} Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.254138 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:35:59 crc kubenswrapper[4804]: I0128 11:35:59.330118 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-kcvj8" Jan 28 11:35:59 crc kubenswrapper[4804]: W0128 11:35:59.352151 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fa1df7e_03c8_4931_ad89_222acae36030.slice/crio-20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9 WatchSource:0}: Error finding container 20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9: Status 404 returned error can't find the container with id 20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9 Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262218 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kcvj8" event={"ID":"2fa1df7e-03c8-4931-ad89-222acae36030","Type":"ContainerStarted","Data":"921e284e3d9b659bbf8816201bf640abe8dcf1ed3a64b6a82af30835099c71c8"} Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262302 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kcvj8" event={"ID":"2fa1df7e-03c8-4931-ad89-222acae36030","Type":"ContainerStarted","Data":"280ceb91f3066f5cecc81e9a83bc4961a7ff321707905f8b5bbc6d5d048e400d"} Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-kcvj8" event={"ID":"2fa1df7e-03c8-4931-ad89-222acae36030","Type":"ContainerStarted","Data":"20e8062fcfe9f526601892fe7e9fde0121731f48bd50ada342bef63e383cb2a9"} Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.262455 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-kcvj8" Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.282485 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-rfhfx" podStartSLOduration=3.282463555 podStartE2EDuration="3.282463555s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:35:59.276236839 +0000 UTC m=+835.071116823" watchObservedRunningTime="2026-01-28 11:36:00.282463555 +0000 UTC m=+836.077343539" Jan 28 11:36:00 crc kubenswrapper[4804]: I0128 11:36:00.283833 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-kcvj8" podStartSLOduration=3.283826831 podStartE2EDuration="3.283826831s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:36:00.280709138 +0000 UTC m=+836.075589122" watchObservedRunningTime="2026-01-28 11:36:00.283826831 +0000 UTC m=+836.078706815" Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.309445 4804 generic.go:334] "Generic (PLEG): container finished" podID="45631116-4b02-448f-9158-18eaae682d9d" containerID="ede577a501563dd65541c3bce23272518eb5a3074520a28edc01707c0be6abde" exitCode=0 Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.309492 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerDied","Data":"ede577a501563dd65541c3bce23272518eb5a3074520a28edc01707c0be6abde"} Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.312986 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" event={"ID":"3ce00c89-f00d-43aa-9907-77bf331c3dbd","Type":"ContainerStarted","Data":"1e62a4e66b969a2eeb364d6821e827820336e596ec172f2870ec6fd5370de40d"} Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.313354 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" Jan 28 11:36:05 crc kubenswrapper[4804]: I0128 11:36:05.352241 4804 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6" podStartSLOduration=2.003251181 podStartE2EDuration="8.352215678s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" firstStartedPulling="2026-01-28 11:35:58.437253217 +0000 UTC m=+834.232133201" lastFinishedPulling="2026-01-28 11:36:04.786217724 +0000 UTC m=+840.581097698" observedRunningTime="2026-01-28 11:36:05.351689122 +0000 UTC m=+841.146569106" watchObservedRunningTime="2026-01-28 11:36:05.352215678 +0000 UTC m=+841.147095672" Jan 28 11:36:06 crc kubenswrapper[4804]: I0128 11:36:06.321394 4804 generic.go:334] "Generic (PLEG): container finished" podID="45631116-4b02-448f-9158-18eaae682d9d" containerID="6b15248246c605c8c463dd7f6c1e1b35ed2c356a3679882be60c7d698ddebd5d" exitCode=0 Jan 28 11:36:06 crc kubenswrapper[4804]: I0128 11:36:06.322097 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerDied","Data":"6b15248246c605c8c463dd7f6c1e1b35ed2c356a3679882be60c7d698ddebd5d"} Jan 28 11:36:07 crc kubenswrapper[4804]: I0128 11:36:07.336760 4804 generic.go:334] "Generic (PLEG): container finished" podID="45631116-4b02-448f-9158-18eaae682d9d" containerID="c5c853034a4df40a34754ab55b0d3ab5bc52a7abb8ba8d9a849712955716ea6d" exitCode=0 Jan 28 11:36:07 crc kubenswrapper[4804]: I0128 11:36:07.336840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerDied","Data":"c5c853034a4df40a34754ab55b0d3ab5bc52a7abb8ba8d9a849712955716ea6d"} Jan 28 11:36:08 crc kubenswrapper[4804]: I0128 11:36:08.346108 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"58337462fb4eeaca49ad37e0df7080d673d6d074b8175c194972c8a1ff44fd59"} Jan 28 
11:36:08 crc kubenswrapper[4804]: I0128 11:36:08.346401 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"f543bf86db90a3d2ce30beb92894630f79a6096e895364cc36da1cb382842757"} Jan 28 11:36:08 crc kubenswrapper[4804]: I0128 11:36:08.346411 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"6011dc90411ae479974801023fdc600ffd10bab42a58bac5f5d5dc6c83e12955"} Jan 28 11:36:08 crc kubenswrapper[4804]: I0128 11:36:08.453876 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-rfhfx" Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.338635 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-kcvj8" Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356264 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"650e496c663f1426d1d86565d331dab6640049b8952d0c419bc1d9d5110c5396"} Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356324 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"7055e999a484a7cec751a69f9b7db66f5e57dc33d929f7107814450d261a314e"} Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356343 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-5kdlz" event={"ID":"45631116-4b02-448f-9158-18eaae682d9d","Type":"ContainerStarted","Data":"344cea65acad0cb986c6dc932bea04a96128153e6cb96fa687eaecd709be2622"} Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.356631 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="metallb-system/frr-k8s-5kdlz" Jan 28 11:36:09 crc kubenswrapper[4804]: I0128 11:36:09.389020 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-5kdlz" podStartSLOduration=5.524881829 podStartE2EDuration="12.388988108s" podCreationTimestamp="2026-01-28 11:35:57 +0000 UTC" firstStartedPulling="2026-01-28 11:35:57.915897911 +0000 UTC m=+833.710777895" lastFinishedPulling="2026-01-28 11:36:04.78000419 +0000 UTC m=+840.574884174" observedRunningTime="2026-01-28 11:36:09.387255112 +0000 UTC m=+845.182135106" watchObservedRunningTime="2026-01-28 11:36:09.388988108 +0000 UTC m=+845.183868162" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.062002 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"] Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.063113 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.070886 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.085560 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"] Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.202730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: 
I0128 11:36:11.202812 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.202925 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.304691 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.304768 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.304810 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.305368 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.305389 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.341712 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.389444 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"
Jan 28 11:36:11 crc kubenswrapper[4804]: I0128 11:36:11.611929 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"]
Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.383642 4804 generic.go:334] "Generic (PLEG): container finished" podID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerID="7756bceb2367830456ddd3c06e76b6e7a3fb386504ae8ce9d485c354f3ef9ad4" exitCode=0
Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.383716 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"7756bceb2367830456ddd3c06e76b6e7a3fb386504ae8ce9d485c354f3ef9ad4"}
Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.384120 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerStarted","Data":"c063c8e9f123410dd7cf39cd3b431953363b9ca9a5df4c05e11bc516660a52c4"}
Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.762085 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-5kdlz"
Jan 28 11:36:12 crc kubenswrapper[4804]: I0128 11:36:12.804403 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-5kdlz"
Jan 28 11:36:17 crc kubenswrapper[4804]: I0128 11:36:17.764425 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-5kdlz"
Jan 28 11:36:18 crc kubenswrapper[4804]: I0128 11:36:18.049680 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-cvlt6"
Jan 28 11:36:19 crc kubenswrapper[4804]: I0128 11:36:19.448141 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerStarted","Data":"2aab0e70d1c51d20c2619151f526eb844d17b978419b20a7f3f0c30e8e80372c"}
Jan 28 11:36:20 crc kubenswrapper[4804]: I0128 11:36:20.456317 4804 generic.go:334] "Generic (PLEG): container finished" podID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerID="2aab0e70d1c51d20c2619151f526eb844d17b978419b20a7f3f0c30e8e80372c" exitCode=0
Jan 28 11:36:20 crc kubenswrapper[4804]: I0128 11:36:20.456447 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"2aab0e70d1c51d20c2619151f526eb844d17b978419b20a7f3f0c30e8e80372c"}
Jan 28 11:36:21 crc kubenswrapper[4804]: I0128 11:36:21.468076 4804 generic.go:334] "Generic (PLEG): container finished" podID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerID="eec4f650f94b2fd54830ba513210773442ce9f43933229f2a03fb53026c41e83" exitCode=0
Jan 28 11:36:21 crc kubenswrapper[4804]: I0128 11:36:21.468143 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"eec4f650f94b2fd54830ba513210773442ce9f43933229f2a03fb53026c41e83"}
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.771262 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.893014 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") pod \"237e3a43-08f5-4b3c-864f-d5f90276bac3\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") "
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.893092 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") pod \"237e3a43-08f5-4b3c-864f-d5f90276bac3\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") "
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.893216 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") pod \"237e3a43-08f5-4b3c-864f-d5f90276bac3\" (UID: \"237e3a43-08f5-4b3c-864f-d5f90276bac3\") "
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.894095 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle" (OuterVolumeSpecName: "bundle") pod "237e3a43-08f5-4b3c-864f-d5f90276bac3" (UID: "237e3a43-08f5-4b3c-864f-d5f90276bac3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.904357 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util" (OuterVolumeSpecName: "util") pod "237e3a43-08f5-4b3c-864f-d5f90276bac3" (UID: "237e3a43-08f5-4b3c-864f-d5f90276bac3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.906520 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz" (OuterVolumeSpecName: "kube-api-access-j7psz") pod "237e3a43-08f5-4b3c-864f-d5f90276bac3" (UID: "237e3a43-08f5-4b3c-864f-d5f90276bac3"). InnerVolumeSpecName "kube-api-access-j7psz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.995148 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-util\") on node \"crc\" DevicePath \"\""
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.995309 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j7psz\" (UniqueName: \"kubernetes.io/projected/237e3a43-08f5-4b3c-864f-d5f90276bac3-kube-api-access-j7psz\") on node \"crc\" DevicePath \"\""
Jan 28 11:36:22 crc kubenswrapper[4804]: I0128 11:36:22.995382 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/237e3a43-08f5-4b3c-864f-d5f90276bac3-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:36:23 crc kubenswrapper[4804]: I0128 11:36:23.483190 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6" event={"ID":"237e3a43-08f5-4b3c-864f-d5f90276bac3","Type":"ContainerDied","Data":"c063c8e9f123410dd7cf39cd3b431953363b9ca9a5df4c05e11bc516660a52c4"}
Jan 28 11:36:23 crc kubenswrapper[4804]: I0128 11:36:23.483227 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c063c8e9f123410dd7cf39cd3b431953363b9ca9a5df4c05e11bc516660a52c4"
Jan 28 11:36:23 crc kubenswrapper[4804]: I0128 11:36:23.483250 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.378477 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"]
Jan 28 11:36:29 crc kubenswrapper[4804]: E0128 11:36:29.379503 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="util"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379519 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="util"
Jan 28 11:36:29 crc kubenswrapper[4804]: E0128 11:36:29.379544 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="pull"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379552 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="pull"
Jan 28 11:36:29 crc kubenswrapper[4804]: E0128 11:36:29.379563 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="extract"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379571 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="extract"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.379681 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="237e3a43-08f5-4b3c-864f-d5f90276bac3" containerName="extract"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.380295 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.385866 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.386044 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-98bvm"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.386241 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.393088 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"]
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.479112 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af27b36c-f1e1-492e-9b04-3ad941908789-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.479201 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4h29\" (UniqueName: \"kubernetes.io/projected/af27b36c-f1e1-492e-9b04-3ad941908789-kube-api-access-l4h29\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.580432 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4h29\" (UniqueName: \"kubernetes.io/projected/af27b36c-f1e1-492e-9b04-3ad941908789-kube-api-access-l4h29\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.580547 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af27b36c-f1e1-492e-9b04-3ad941908789-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.581086 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/af27b36c-f1e1-492e-9b04-3ad941908789-tmp\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.607417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4h29\" (UniqueName: \"kubernetes.io/projected/af27b36c-f1e1-492e-9b04-3ad941908789-kube-api-access-l4h29\") pod \"cert-manager-operator-controller-manager-64cf6dff88-nhqfl\" (UID: \"af27b36c-f1e1-492e-9b04-3ad941908789\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:29 crc kubenswrapper[4804]: I0128 11:36:29.698812 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"
Jan 28 11:36:30 crc kubenswrapper[4804]: I0128 11:36:30.158859 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl"]
Jan 28 11:36:30 crc kubenswrapper[4804]: I0128 11:36:30.524579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" event={"ID":"af27b36c-f1e1-492e-9b04-3ad941908789","Type":"ContainerStarted","Data":"8f425d6d502752a9d6d0692a0818820c3fc46cdcd8fc11d34568aaa510ca26ad"}
Jan 28 11:36:52 crc kubenswrapper[4804]: I0128 11:36:52.676005 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" event={"ID":"af27b36c-f1e1-492e-9b04-3ad941908789","Type":"ContainerStarted","Data":"92c1db0255f69e1776c4401b6f4bef74a564dd481eae0872024fac22b4bbac3e"}
Jan 28 11:36:52 crc kubenswrapper[4804]: I0128 11:36:52.709249 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64cf6dff88-nhqfl" podStartSLOduration=2.095180022 podStartE2EDuration="23.709230109s" podCreationTimestamp="2026-01-28 11:36:29 +0000 UTC" firstStartedPulling="2026-01-28 11:36:30.170301025 +0000 UTC m=+865.965181009" lastFinishedPulling="2026-01-28 11:36:51.784351102 +0000 UTC m=+887.579231096" observedRunningTime="2026-01-28 11:36:52.704440554 +0000 UTC m=+888.499320568" watchObservedRunningTime="2026-01-28 11:36:52.709230109 +0000 UTC m=+888.504110103"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.532010 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"]
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.533290 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.535452 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-z6tks"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.536013 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.542533 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"]
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.543235 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.606499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5d5z\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-kube-api-access-n5d5z\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.606607 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.707935 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.708001 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5d5z\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-kube-api-access-n5d5z\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.731544 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5d5z\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-kube-api-access-n5d5z\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.752430 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/dd7c8a18-36d1-45d5-aaf5-daff9b218438-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-cjsz8\" (UID: \"dd7c8a18-36d1-45d5-aaf5-daff9b218438\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:54 crc kubenswrapper[4804]: I0128 11:36:54.854916 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:36:55 crc kubenswrapper[4804]: I0128 11:36:55.385552 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"]
Jan 28 11:36:55 crc kubenswrapper[4804]: I0128 11:36:55.692288 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" event={"ID":"dd7c8a18-36d1-45d5-aaf5-daff9b218438","Type":"ContainerStarted","Data":"881073ac4e29852f574bf1e62185b39972539c646588b9ba86aa84c0868d0382"}
Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.857816 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"]
Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.859172 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.861088 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-wfvth"
Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.879267 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"]
Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.947466 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-kube-api-access-4ggln\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:56 crc kubenswrapper[4804]: I0128 11:36:56.947686 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.048869 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-kube-api-access-4ggln\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.048980 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.079207 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4ggln\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-kube-api-access-4ggln\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.081708 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/47a0c933-7194-403d-8345-446cc9941fa5-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-pgj92\" (UID: \"47a0c933-7194-403d-8345-446cc9941fa5\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.180620 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"
Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.631704 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-pgj92"]
Jan 28 11:36:57 crc kubenswrapper[4804]: W0128 11:36:57.640236 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a0c933_7194_403d_8345_446cc9941fa5.slice/crio-020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a WatchSource:0}: Error finding container 020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a: Status 404 returned error can't find the container with id 020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a
Jan 28 11:36:57 crc kubenswrapper[4804]: I0128 11:36:57.709549 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" event={"ID":"47a0c933-7194-403d-8345-446cc9941fa5","Type":"ContainerStarted","Data":"020e830cecdf8fc12d6a7426c472cc31ff9ed052491772cd21682d2a33f62e8a"}
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.618268 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"]
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.619937 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.631478 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"]
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.706486 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.706571 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.706626 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.807617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.807752 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.807833 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.808253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.808322 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.826587 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"redhat-marketplace-6lmzp\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:01 crc kubenswrapper[4804]: I0128 11:37:01.949032 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:03 crc kubenswrapper[4804]: I0128 11:37:03.456455 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"]
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.751907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" event={"ID":"dd7c8a18-36d1-45d5-aaf5-daff9b218438","Type":"ContainerStarted","Data":"a0f93ecfab39a23449a9fa3d174d5b6f39095d0d316bca435ba12fb8588de85e"}
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.752234 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.754487 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" event={"ID":"47a0c933-7194-403d-8345-446cc9941fa5","Type":"ContainerStarted","Data":"d52e56df8738f57e09c7669efa58b00e365fe1cc5fca928e26b7fb002d00e1fc"}
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.757049 4804 generic.go:334] "Generic (PLEG): container finished" podID="87376851-1792-4b24-bc20-c87628a93a38" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" exitCode=0
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.757129 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c"}
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.757169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerStarted","Data":"1c1607d09c480ca59ed3afc9076abed7f9c2ddd81e561ad505762d69e617b5eb"}
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.771282 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8" podStartSLOduration=2.17287153 podStartE2EDuration="10.771263041s" podCreationTimestamp="2026-01-28 11:36:54 +0000 UTC" firstStartedPulling="2026-01-28 11:36:55.39972256 +0000 UTC m=+891.194602544" lastFinishedPulling="2026-01-28 11:37:03.998114071 +0000 UTC m=+899.792994055" observedRunningTime="2026-01-28 11:37:04.768696319 +0000 UTC m=+900.563576303" watchObservedRunningTime="2026-01-28 11:37:04.771263041 +0000 UTC m=+900.566143025"
Jan 28 11:37:04 crc kubenswrapper[4804]: I0128 11:37:04.808248 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-pgj92" podStartSLOduration=2.420131205 podStartE2EDuration="8.808222952s" podCreationTimestamp="2026-01-28 11:36:56 +0000 UTC" firstStartedPulling="2026-01-28 11:36:57.645497087 +0000 UTC m=+893.440377071" lastFinishedPulling="2026-01-28 11:37:04.033588834 +0000 UTC m=+899.828468818" observedRunningTime="2026-01-28 11:37:04.800607927 +0000 UTC m=+900.595487911" watchObservedRunningTime="2026-01-28 11:37:04.808222952 +0000 UTC m=+900.603102976"
Jan 28 11:37:06 crc kubenswrapper[4804]: I0128 11:37:06.772060 4804 generic.go:334] "Generic (PLEG): container finished" podID="87376851-1792-4b24-bc20-c87628a93a38" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" exitCode=0
Jan 28 11:37:06 crc kubenswrapper[4804]: I0128 11:37:06.772135 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88"}
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.502438 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hkwds"]
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.503449 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.508068 4804 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-w2npd"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.510246 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hkwds"]
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.598426 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-bound-sa-token\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.598527 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26qkd\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-kube-api-access-26qkd\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.699464 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26qkd\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-kube-api-access-26qkd\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.699588 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-bound-sa-token\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.720692 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-bound-sa-token\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.720939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26qkd\" (UniqueName: \"kubernetes.io/projected/4da2c74c-883d-4690-bb94-a34b198ccf89-kube-api-access-26qkd\") pod \"cert-manager-86cb77c54b-hkwds\" (UID: \"4da2c74c-883d-4690-bb94-a34b198ccf89\") " pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.783647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerStarted","Data":"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd"}
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.807979 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-6lmzp" podStartSLOduration=4.188676745 podStartE2EDuration="6.807959942s" podCreationTimestamp="2026-01-28 11:37:01 +0000 UTC" firstStartedPulling="2026-01-28 11:37:04.758186461 +0000 UTC m=+900.553066445" lastFinishedPulling="2026-01-28 11:37:07.377469648 +0000 UTC m=+903.172349642" observedRunningTime="2026-01-28 11:37:07.803099896 +0000 UTC m=+903.597979880" watchObservedRunningTime="2026-01-28 11:37:07.807959942 +0000 UTC m=+903.602839926"
Jan 28 11:37:07 crc kubenswrapper[4804]: I0128 11:37:07.823264 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-hkwds"
Jan 28 11:37:08 crc kubenswrapper[4804]: I0128 11:37:08.104722 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-hkwds"]
Jan 28 11:37:08 crc kubenswrapper[4804]: I0128 11:37:08.791113 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-hkwds" event={"ID":"4da2c74c-883d-4690-bb94-a34b198ccf89","Type":"ContainerStarted","Data":"192d106a15fd447a9319e49bd02a3c7368a5b41e2631d8be6d296ba0ec46acc3"}
Jan 28 11:37:08 crc kubenswrapper[4804]: I0128 11:37:08.791465 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-hkwds" event={"ID":"4da2c74c-883d-4690-bb94-a34b198ccf89","Type":"ContainerStarted","Data":"d389fd4a1afc18455e821132fefac4ed2ebc9ca4ed1818da43afbdce6d58ae65"}
Jan 28 11:37:09 crc kubenswrapper[4804]: I0128 11:37:09.858747 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-cjsz8"
Jan 28 11:37:09 crc kubenswrapper[4804]: I0128 11:37:09.878402 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-86cb77c54b-hkwds" podStartSLOduration=2.878369222 podStartE2EDuration="2.878369222s" podCreationTimestamp="2026-01-28 11:37:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:37:08.808051921 +0000 UTC m=+904.602931915" watchObservedRunningTime="2026-01-28 11:37:09.878369222 +0000 UTC m=+905.673249206"
Jan 28 11:37:11 crc kubenswrapper[4804]: I0128 11:37:11.949551 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:11 crc kubenswrapper[4804]: I0128 11:37:11.949600 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:11 crc kubenswrapper[4804]: I0128 11:37:11.992686 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.582388 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.582457 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.850119 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-6lmzp"
Jan 28 11:37:12 crc kubenswrapper[4804]: I0128 11:37:12.893355 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"]
Jan 28 11:37:14 crc kubenswrapper[4804]: I0128 11:37:14.825738 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-6lmzp" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" containerID="cri-o://cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" gracePeriod=2
Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.793429 4804 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841092 4804 generic.go:334] "Generic (PLEG): container finished" podID="87376851-1792-4b24-bc20-c87628a93a38" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" exitCode=0 Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841155 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd"} Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841196 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-6lmzp" event={"ID":"87376851-1792-4b24-bc20-c87628a93a38","Type":"ContainerDied","Data":"1c1607d09c480ca59ed3afc9076abed7f9c2ddd81e561ad505762d69e617b5eb"} Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841221 4804 scope.go:117] "RemoveContainer" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.841231 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-6lmzp" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.868553 4804 scope.go:117] "RemoveContainer" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.894377 4804 scope.go:117] "RemoveContainer" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.908230 4804 scope.go:117] "RemoveContainer" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" Jan 28 11:37:15 crc kubenswrapper[4804]: E0128 11:37:15.908731 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd\": container with ID starting with cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd not found: ID does not exist" containerID="cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.908777 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd"} err="failed to get container status \"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd\": rpc error: code = NotFound desc = could not find container \"cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd\": container with ID starting with cc22cc2811910ded1e619131faba7703cf7ae905ac6ef86e8ab1fb7b41b877bd not found: ID does not exist" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.908804 4804 scope.go:117] "RemoveContainer" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" Jan 28 11:37:15 crc kubenswrapper[4804]: E0128 11:37:15.909110 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88\": container with ID starting with 567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88 not found: ID does not exist" containerID="567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.909148 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88"} err="failed to get container status \"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88\": rpc error: code = NotFound desc = could not find container \"567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88\": container with ID starting with 567b54ed66e76a6ea7be36e49637daa0dc44a66420682cdf75f69fffd976ad88 not found: ID does not exist" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.909174 4804 scope.go:117] "RemoveContainer" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" Jan 28 11:37:15 crc kubenswrapper[4804]: E0128 11:37:15.909577 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c\": container with ID starting with e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c not found: ID does not exist" containerID="e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.909621 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c"} err="failed to get container status \"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c\": rpc error: code = NotFound desc = could not find container 
\"e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c\": container with ID starting with e7f7828fc49fafad4276265249f367a1e6aaacee2f33d77ec07d15244b04df6c not found: ID does not exist" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.911077 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") pod \"87376851-1792-4b24-bc20-c87628a93a38\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.911148 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") pod \"87376851-1792-4b24-bc20-c87628a93a38\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.911177 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") pod \"87376851-1792-4b24-bc20-c87628a93a38\" (UID: \"87376851-1792-4b24-bc20-c87628a93a38\") " Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.912467 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities" (OuterVolumeSpecName: "utilities") pod "87376851-1792-4b24-bc20-c87628a93a38" (UID: "87376851-1792-4b24-bc20-c87628a93a38"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.916938 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf" (OuterVolumeSpecName: "kube-api-access-mxhzf") pod "87376851-1792-4b24-bc20-c87628a93a38" (UID: "87376851-1792-4b24-bc20-c87628a93a38"). InnerVolumeSpecName "kube-api-access-mxhzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:15 crc kubenswrapper[4804]: I0128 11:37:15.939927 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "87376851-1792-4b24-bc20-c87628a93a38" (UID: "87376851-1792-4b24-bc20-c87628a93a38"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.012384 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.012423 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxhzf\" (UniqueName: \"kubernetes.io/projected/87376851-1792-4b24-bc20-c87628a93a38-kube-api-access-mxhzf\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.012440 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/87376851-1792-4b24-bc20-c87628a93a38-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031377 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 
11:37:16.031741 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-utilities" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031764 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-utilities" Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 11:37:16.031778 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031790 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 11:37:16.031823 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-content" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.031835 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="extract-content" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.032043 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="87376851-1792-4b24-bc20-c87628a93a38" containerName="registry-server" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.032855 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.034605 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-tf9rf" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.037055 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.037154 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.040806 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.171619 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.175905 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-6lmzp"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.213766 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"openstack-operator-index-nslxx\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: E0128 11:37:16.236110 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87376851_1792_4b24_bc20_c87628a93a38.slice\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod87376851_1792_4b24_bc20_c87628a93a38.slice/crio-1c1607d09c480ca59ed3afc9076abed7f9c2ddd81e561ad505762d69e617b5eb\": RecentStats: unable to find data in memory cache]" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.315697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"openstack-operator-index-nslxx\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.335610 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"openstack-operator-index-nslxx\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.353255 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.530272 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.850739 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerStarted","Data":"09f757a13b0e21cb48db3d2e18b2c22548dbb1ce8278a04562fd84b33b66a1a7"} Jan 28 11:37:16 crc kubenswrapper[4804]: I0128 11:37:16.925692 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87376851-1792-4b24-bc20-c87628a93a38" path="/var/lib/kubelet/pods/87376851-1792-4b24-bc20-c87628a93a38/volumes" Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.229794 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.837015 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-cmjpc"] Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.838239 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.846525 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmjpc"] Jan 28 11:37:21 crc kubenswrapper[4804]: I0128 11:37:21.911308 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjsjm\" (UniqueName: \"kubernetes.io/projected/d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec-kube-api-access-pjsjm\") pod \"openstack-operator-index-cmjpc\" (UID: \"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec\") " pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.012784 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjsjm\" (UniqueName: \"kubernetes.io/projected/d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec-kube-api-access-pjsjm\") pod \"openstack-operator-index-cmjpc\" (UID: \"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec\") " pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.031278 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjsjm\" (UniqueName: \"kubernetes.io/projected/d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec-kube-api-access-pjsjm\") pod \"openstack-operator-index-cmjpc\" (UID: \"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec\") " pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.170296 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.391579 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-cmjpc"] Jan 28 11:37:22 crc kubenswrapper[4804]: I0128 11:37:22.902908 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmjpc" event={"ID":"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec","Type":"ContainerStarted","Data":"e91bc7fa7fe8a5ca3b7b1c3727492ed51bf9fa2a650d08ef0767845204bbb9ad"} Jan 28 11:37:32 crc kubenswrapper[4804]: I0128 11:37:32.962875 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-cmjpc" event={"ID":"d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec","Type":"ContainerStarted","Data":"0908221b06e72200159b250f3b375731ea9d3f075be6e2dacff8316f87d4000a"} Jan 28 11:37:32 crc kubenswrapper[4804]: I0128 11:37:32.965112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerStarted","Data":"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5"} Jan 28 11:37:32 crc kubenswrapper[4804]: I0128 11:37:32.965324 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-nslxx" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerName="registry-server" containerID="cri-o://ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" gracePeriod=2 Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.000047 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-nslxx" podStartSLOduration=1.059743727 podStartE2EDuration="17.000026223s" podCreationTimestamp="2026-01-28 11:37:16 +0000 UTC" firstStartedPulling="2026-01-28 11:37:16.538866182 +0000 UTC 
m=+912.333746176" lastFinishedPulling="2026-01-28 11:37:32.479148688 +0000 UTC m=+928.274028672" observedRunningTime="2026-01-28 11:37:32.996447797 +0000 UTC m=+928.791327781" watchObservedRunningTime="2026-01-28 11:37:33.000026223 +0000 UTC m=+928.794906227" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.005233 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-cmjpc" podStartSLOduration=1.9458477159999998 podStartE2EDuration="12.00520966s" podCreationTimestamp="2026-01-28 11:37:21 +0000 UTC" firstStartedPulling="2026-01-28 11:37:22.404028726 +0000 UTC m=+918.198908710" lastFinishedPulling="2026-01-28 11:37:32.46339067 +0000 UTC m=+928.258270654" observedRunningTime="2026-01-28 11:37:32.983551752 +0000 UTC m=+928.778431736" watchObservedRunningTime="2026-01-28 11:37:33.00520966 +0000 UTC m=+928.800089644" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.402895 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.572184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") pod \"905df814-fa43-4ef1-b5e6-cfa26ec65547\" (UID: \"905df814-fa43-4ef1-b5e6-cfa26ec65547\") " Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.578150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6" (OuterVolumeSpecName: "kube-api-access-tflf6") pod "905df814-fa43-4ef1-b5e6-cfa26ec65547" (UID: "905df814-fa43-4ef1-b5e6-cfa26ec65547"). InnerVolumeSpecName "kube-api-access-tflf6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.673866 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tflf6\" (UniqueName: \"kubernetes.io/projected/905df814-fa43-4ef1-b5e6-cfa26ec65547-kube-api-access-tflf6\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973321 4804 generic.go:334] "Generic (PLEG): container finished" podID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" exitCode=0 Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973387 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-nslxx" Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973419 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerDied","Data":"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5"} Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973446 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-nslxx" event={"ID":"905df814-fa43-4ef1-b5e6-cfa26ec65547","Type":"ContainerDied","Data":"09f757a13b0e21cb48db3d2e18b2c22548dbb1ce8278a04562fd84b33b66a1a7"} Jan 28 11:37:33 crc kubenswrapper[4804]: I0128 11:37:33.973462 4804 scope.go:117] "RemoveContainer" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.039497 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.042740 4804 scope.go:117] "RemoveContainer" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" Jan 28 11:37:34 crc 
kubenswrapper[4804]: E0128 11:37:34.043381 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5\": container with ID starting with ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5 not found: ID does not exist" containerID="ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5" Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.043420 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5"} err="failed to get container status \"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5\": rpc error: code = NotFound desc = could not find container \"ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5\": container with ID starting with ce36f74656ecb34e96f500e3c9cfbcecf03f40b2e83b95dba15357eac5c095b5 not found: ID does not exist" Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.044832 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-nslxx"] Jan 28 11:37:34 crc kubenswrapper[4804]: I0128 11:37:34.922318 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" path="/var/lib/kubelet/pods/905df814-fa43-4ef1-b5e6-cfa26ec65547/volumes" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.116556 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:36 crc kubenswrapper[4804]: E0128 11:37:36.116848 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerName="registry-server" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.116863 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" 
containerName="registry-server" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.117021 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="905df814-fa43-4ef1-b5e6-cfa26ec65547" containerName="registry-server" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.118060 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.132166 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.307568 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.307629 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.307651 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.408392 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.408666 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.408690 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.409178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.409867 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.443825 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6ztx\" (UniqueName: 
\"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"certified-operators-gpglg\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.451681 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.897774 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:36 crc kubenswrapper[4804]: W0128 11:37:36.900126 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod743d1389_d1bf_4a3d_9dd2_c5e5cbb2373b.slice/crio-47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff WatchSource:0}: Error finding container 47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff: Status 404 returned error can't find the container with id 47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff Jan 28 11:37:36 crc kubenswrapper[4804]: I0128 11:37:36.992398 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerStarted","Data":"47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff"} Jan 28 11:37:38 crc kubenswrapper[4804]: I0128 11:37:38.001472 4804 generic.go:334] "Generic (PLEG): container finished" podID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" exitCode=0 Jan 28 11:37:38 crc kubenswrapper[4804]: I0128 11:37:38.001601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" 
event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41"} Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.018387 4804 generic.go:334] "Generic (PLEG): container finished" podID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" exitCode=0 Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.018486 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646"} Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.903950 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.905778 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.936713 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.981353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.981472 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:40 crc kubenswrapper[4804]: I0128 11:37:40.981524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.027337 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerStarted","Data":"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d"} Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.082975 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083053 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083546 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.083570 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.102687 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gnj9\" (UniqueName: 
\"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"community-operators-vx6td\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.239910 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.804402 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-gpglg" podStartSLOduration=3.076376749 podStartE2EDuration="5.804360268s" podCreationTimestamp="2026-01-28 11:37:36 +0000 UTC" firstStartedPulling="2026-01-28 11:37:38.003981342 +0000 UTC m=+933.798861336" lastFinishedPulling="2026-01-28 11:37:40.731964871 +0000 UTC m=+936.526844855" observedRunningTime="2026-01-28 11:37:41.048470304 +0000 UTC m=+936.843350288" watchObservedRunningTime="2026-01-28 11:37:41.804360268 +0000 UTC m=+937.599240252" Jan 28 11:37:41 crc kubenswrapper[4804]: I0128 11:37:41.806040 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.034280 4804 generic.go:334] "Generic (PLEG): container finished" podID="a97c4398-9f91-4756-998e-ffd494da9163" containerID="d24bdf38c2ae0d9aa7afcd4df2208cd063d6380b57d496ed6102a48ceb575f6d" exitCode=0 Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.034337 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"d24bdf38c2ae0d9aa7afcd4df2208cd063d6380b57d496ed6102a48ceb575f6d"} Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.034410 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" 
event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerStarted","Data":"e7c74f09e0faa8b24cdcfbf5befa9f8e319aac43fb5b70dd37852eec57f84da2"} Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.171370 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.171429 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.212432 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.582495 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:37:42 crc kubenswrapper[4804]: I0128 11:37:42.582608 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:37:43 crc kubenswrapper[4804]: I0128 11:37:43.067164 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-cmjpc" Jan 28 11:37:44 crc kubenswrapper[4804]: I0128 11:37:44.060043 4804 generic.go:334] "Generic (PLEG): container finished" podID="a97c4398-9f91-4756-998e-ffd494da9163" containerID="bb0a30930c53cbd838eba75e98440b18d18245af3b3e1d63a9c2fb93b9f87213" exitCode=0 Jan 28 11:37:44 crc kubenswrapper[4804]: I0128 11:37:44.060495 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"bb0a30930c53cbd838eba75e98440b18d18245af3b3e1d63a9c2fb93b9f87213"} Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.931711 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s"] Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.933197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.935089 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-bjqwr" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.954423 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s"] Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.966297 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.966427 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " 
pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:45 crc kubenswrapper[4804]: I0128 11:37:45.966454 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.067824 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.067875 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.067989 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 
11:37:46.068531 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.068567 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.091481 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerStarted","Data":"d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677"} Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.093345 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.113753 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vx6td" podStartSLOduration=3.251817885 podStartE2EDuration="6.113730127s" podCreationTimestamp="2026-01-28 11:37:40 +0000 UTC" firstStartedPulling="2026-01-28 11:37:42.035679028 
+0000 UTC m=+937.830559012" lastFinishedPulling="2026-01-28 11:37:44.89759127 +0000 UTC m=+940.692471254" observedRunningTime="2026-01-28 11:37:46.107771715 +0000 UTC m=+941.902651709" watchObservedRunningTime="2026-01-28 11:37:46.113730127 +0000 UTC m=+941.908610111" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.248118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.451843 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.451907 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.477696 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s"] Jan 28 11:37:46 crc kubenswrapper[4804]: W0128 11:37:46.490078 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod490a3033_f3bb_4a92_a03e_03ada6af8280.slice/crio-d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17 WatchSource:0}: Error finding container d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17: Status 404 returned error can't find the container with id d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17 Jan 28 11:37:46 crc kubenswrapper[4804]: I0128 11:37:46.514688 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:47 crc kubenswrapper[4804]: I0128 11:37:47.101262 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerStarted","Data":"d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17"} Jan 28 11:37:47 crc kubenswrapper[4804]: I0128 11:37:47.154660 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:48 crc kubenswrapper[4804]: I0128 11:37:48.110320 4804 generic.go:334] "Generic (PLEG): container finished" podID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerID="2ee4796032efcf54d68f75c9e1e04544636a0c96d591e322215de56136c937f0" exitCode=0 Jan 28 11:37:48 crc kubenswrapper[4804]: I0128 11:37:48.110368 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"2ee4796032efcf54d68f75c9e1e04544636a0c96d591e322215de56136c937f0"} Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.086094 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.117271 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-gpglg" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" containerID="cri-o://18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" gracePeriod=2 Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.618047 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.720698 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") pod \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.720811 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") pod \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.720969 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") pod \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\" (UID: \"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b\") " Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.721688 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities" (OuterVolumeSpecName: "utilities") pod "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" (UID: "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.726730 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx" (OuterVolumeSpecName: "kube-api-access-n6ztx") pod "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" (UID: "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b"). InnerVolumeSpecName "kube-api-access-n6ztx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.772131 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" (UID: "743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.822140 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.822185 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6ztx\" (UniqueName: \"kubernetes.io/projected/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-kube-api-access-n6ztx\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:49 crc kubenswrapper[4804]: I0128 11:37:49.822200 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.123974 4804 generic.go:334] "Generic (PLEG): container finished" podID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerID="eb4e45d9dabea51515670fa9925b8bbeff67a087ae29f57c10d07517c484e3bc" exitCode=0 Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.124064 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"eb4e45d9dabea51515670fa9925b8bbeff67a087ae29f57c10d07517c484e3bc"} Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127771 4804 
generic.go:334] "Generic (PLEG): container finished" podID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" exitCode=0 Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127819 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-gpglg" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127828 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d"} Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-gpglg" event={"ID":"743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b","Type":"ContainerDied","Data":"47b0623a468cf75667fac9efd23f7ec477ae60e09a8412eba6f15ac6094df6ff"} Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.127935 4804 scope.go:117] "RemoveContainer" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.160145 4804 scope.go:117] "RemoveContainer" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.163061 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.167679 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-gpglg"] Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.190386 4804 scope.go:117] "RemoveContainer" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.207685 4804 scope.go:117] 
"RemoveContainer" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" Jan 28 11:37:50 crc kubenswrapper[4804]: E0128 11:37:50.208159 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d\": container with ID starting with 18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d not found: ID does not exist" containerID="18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208205 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d"} err="failed to get container status \"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d\": rpc error: code = NotFound desc = could not find container \"18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d\": container with ID starting with 18802d643a868f63e0ac823b6217cd339d1b5344e49d33081136d9aa6e381a4d not found: ID does not exist" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208240 4804 scope.go:117] "RemoveContainer" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" Jan 28 11:37:50 crc kubenswrapper[4804]: E0128 11:37:50.208564 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646\": container with ID starting with f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646 not found: ID does not exist" containerID="f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208602 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646"} err="failed to get container status \"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646\": rpc error: code = NotFound desc = could not find container \"f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646\": container with ID starting with f6b0c30033f4def0c227b7abbe1ff73ad9a58f818595ce03f2bad78c5406c646 not found: ID does not exist" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.208623 4804 scope.go:117] "RemoveContainer" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" Jan 28 11:37:50 crc kubenswrapper[4804]: E0128 11:37:50.209079 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41\": container with ID starting with 7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41 not found: ID does not exist" containerID="7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.209103 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41"} err="failed to get container status \"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41\": rpc error: code = NotFound desc = could not find container \"7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41\": container with ID starting with 7d45d749b8cdbe090818d3ed2ebc0c708d8d00c0d03efa227ce9e96a8fb52a41 not found: ID does not exist" Jan 28 11:37:50 crc kubenswrapper[4804]: I0128 11:37:50.922924 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" path="/var/lib/kubelet/pods/743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b/volumes" Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 
11:37:51.135993 4804 generic.go:334] "Generic (PLEG): container finished" podID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerID="13fc07c706217dd316621b62486c27851a4bdc65246c2371fdaed612e1e6e287" exitCode=0 Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.136104 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"13fc07c706217dd316621b62486c27851a4bdc65246c2371fdaed612e1e6e287"} Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.240821 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.240948 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:51 crc kubenswrapper[4804]: I0128 11:37:51.282716 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.330605 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.447823 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.554602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") pod \"490a3033-f3bb-4a92-a03e-03ada6af8280\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.554684 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") pod \"490a3033-f3bb-4a92-a03e-03ada6af8280\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.554751 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") pod \"490a3033-f3bb-4a92-a03e-03ada6af8280\" (UID: \"490a3033-f3bb-4a92-a03e-03ada6af8280\") " Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.555809 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle" (OuterVolumeSpecName: "bundle") pod "490a3033-f3bb-4a92-a03e-03ada6af8280" (UID: "490a3033-f3bb-4a92-a03e-03ada6af8280"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.562033 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm" (OuterVolumeSpecName: "kube-api-access-2h9sm") pod "490a3033-f3bb-4a92-a03e-03ada6af8280" (UID: "490a3033-f3bb-4a92-a03e-03ada6af8280"). InnerVolumeSpecName "kube-api-access-2h9sm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.656581 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h9sm\" (UniqueName: \"kubernetes.io/projected/490a3033-f3bb-4a92-a03e-03ada6af8280-kube-api-access-2h9sm\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.656620 4804 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:52 crc kubenswrapper[4804]: I0128 11:37:52.996920 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util" (OuterVolumeSpecName: "util") pod "490a3033-f3bb-4a92-a03e-03ada6af8280" (UID: "490a3033-f3bb-4a92-a03e-03ada6af8280"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.061672 4804 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/490a3033-f3bb-4a92-a03e-03ada6af8280-util\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.153288 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" event={"ID":"490a3033-f3bb-4a92-a03e-03ada6af8280","Type":"ContainerDied","Data":"d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17"} Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.153328 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e01f91c5e24cd368a227381f9fc701e5a7edd94e1668950f51adc4749fca17" Jan 28 11:37:53 crc kubenswrapper[4804]: I0128 11:37:53.153525 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s" Jan 28 11:37:55 crc kubenswrapper[4804]: I0128 11:37:55.691319 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:55 crc kubenswrapper[4804]: I0128 11:37:55.691902 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vx6td" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" containerID="cri-o://d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677" gracePeriod=2 Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.172452 4804 generic.go:334] "Generic (PLEG): container finished" podID="a97c4398-9f91-4756-998e-ffd494da9163" containerID="d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677" exitCode=0 Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.172515 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677"} Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.604588 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.709430 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") pod \"a97c4398-9f91-4756-998e-ffd494da9163\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.709493 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") pod \"a97c4398-9f91-4756-998e-ffd494da9163\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.709590 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") pod \"a97c4398-9f91-4756-998e-ffd494da9163\" (UID: \"a97c4398-9f91-4756-998e-ffd494da9163\") " Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.710468 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities" (OuterVolumeSpecName: "utilities") pod "a97c4398-9f91-4756-998e-ffd494da9163" (UID: "a97c4398-9f91-4756-998e-ffd494da9163"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.716137 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9" (OuterVolumeSpecName: "kube-api-access-2gnj9") pod "a97c4398-9f91-4756-998e-ffd494da9163" (UID: "a97c4398-9f91-4756-998e-ffd494da9163"). InnerVolumeSpecName "kube-api-access-2gnj9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.758120 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a97c4398-9f91-4756-998e-ffd494da9163" (UID: "a97c4398-9f91-4756-998e-ffd494da9163"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.810761 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2gnj9\" (UniqueName: \"kubernetes.io/projected/a97c4398-9f91-4756-998e-ffd494da9163-kube-api-access-2gnj9\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.810816 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:56 crc kubenswrapper[4804]: I0128 11:37:56.810829 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a97c4398-9f91-4756-998e-ffd494da9163-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.180568 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vx6td" event={"ID":"a97c4398-9f91-4756-998e-ffd494da9163","Type":"ContainerDied","Data":"e7c74f09e0faa8b24cdcfbf5befa9f8e319aac43fb5b70dd37852eec57f84da2"} Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.180915 4804 scope.go:117] "RemoveContainer" containerID="d2366c6c1c4339c2d0fe014cc4f4449d3a59c88fa9951a02f3ae19c398b19677" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.180664 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vx6td" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.204681 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.207506 4804 scope.go:117] "RemoveContainer" containerID="bb0a30930c53cbd838eba75e98440b18d18245af3b3e1d63a9c2fb93b9f87213" Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.209584 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vx6td"] Jan 28 11:37:57 crc kubenswrapper[4804]: I0128 11:37:57.225774 4804 scope.go:117] "RemoveContainer" containerID="d24bdf38c2ae0d9aa7afcd4df2208cd063d6380b57d496ed6102a48ceb575f6d" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058215 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9"] Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058479 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058492 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058508 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058513 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058523 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-utilities" Jan 28 11:37:58 crc 
kubenswrapper[4804]: I0128 11:37:58.058531 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-utilities" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058542 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="pull" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058548 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="pull" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058559 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="extract-content" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058571 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="extract" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058577 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="extract" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058584 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058589 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058597 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="util" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058603 4804 
state_mem.go:107] "Deleted CPUSet assignment" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="util" Jan 28 11:37:58 crc kubenswrapper[4804]: E0128 11:37:58.058614 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-utilities" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058619 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="extract-utilities" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058719 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="743d1389-d1bf-4a3d-9dd2-c5e5cbb2373b" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058734 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a97c4398-9f91-4756-998e-ffd494da9163" containerName="registry-server" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.058746 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="490a3033-f3bb-4a92-a03e-03ada6af8280" containerName="extract" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.059127 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.062092 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-init-dockercfg-cn2xq" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.136587 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9"] Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.227708 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bzqz\" (UniqueName: \"kubernetes.io/projected/134135c7-1032-47aa-b0bd-361463826caf-kube-api-access-9bzqz\") pod \"openstack-operator-controller-init-cdb5b4f99-hxlm9\" (UID: \"134135c7-1032-47aa-b0bd-361463826caf\") " pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.329485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bzqz\" (UniqueName: \"kubernetes.io/projected/134135c7-1032-47aa-b0bd-361463826caf-kube-api-access-9bzqz\") pod \"openstack-operator-controller-init-cdb5b4f99-hxlm9\" (UID: \"134135c7-1032-47aa-b0bd-361463826caf\") " pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.351294 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bzqz\" (UniqueName: \"kubernetes.io/projected/134135c7-1032-47aa-b0bd-361463826caf-kube-api-access-9bzqz\") pod \"openstack-operator-controller-init-cdb5b4f99-hxlm9\" (UID: \"134135c7-1032-47aa-b0bd-361463826caf\") " pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.375362 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.588076 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9"] Jan 28 11:37:58 crc kubenswrapper[4804]: W0128 11:37:58.592055 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod134135c7_1032_47aa_b0bd_361463826caf.slice/crio-05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398 WatchSource:0}: Error finding container 05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398: Status 404 returned error can't find the container with id 05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398 Jan 28 11:37:58 crc kubenswrapper[4804]: I0128 11:37:58.925084 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a97c4398-9f91-4756-998e-ffd494da9163" path="/var/lib/kubelet/pods/a97c4398-9f91-4756-998e-ffd494da9163/volumes" Jan 28 11:37:59 crc kubenswrapper[4804]: I0128 11:37:59.194906 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" event={"ID":"134135c7-1032-47aa-b0bd-361463826caf","Type":"ContainerStarted","Data":"05702102c283dc28b92c8e163b243830fe446e15cbd497515e054b71706be398"} Jan 28 11:38:04 crc kubenswrapper[4804]: I0128 11:38:04.236422 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" event={"ID":"134135c7-1032-47aa-b0bd-361463826caf","Type":"ContainerStarted","Data":"b1f85cb71fe4fbe86eaf50c3a44e67549139f598b8f0c430ab34a6812c0a577f"} Jan 28 11:38:04 crc kubenswrapper[4804]: I0128 11:38:04.236971 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 
11:38:04 crc kubenswrapper[4804]: I0128 11:38:04.263820 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" podStartSLOduration=1.194391686 podStartE2EDuration="6.263803493s" podCreationTimestamp="2026-01-28 11:37:58 +0000 UTC" firstStartedPulling="2026-01-28 11:37:58.594237927 +0000 UTC m=+954.389117911" lastFinishedPulling="2026-01-28 11:38:03.663649734 +0000 UTC m=+959.458529718" observedRunningTime="2026-01-28 11:38:04.260117964 +0000 UTC m=+960.054997948" watchObservedRunningTime="2026-01-28 11:38:04.263803493 +0000 UTC m=+960.058683477" Jan 28 11:38:08 crc kubenswrapper[4804]: I0128 11:38:08.378929 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-init-cdb5b4f99-hxlm9" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.582624 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.582696 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.582743 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.583362 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:38:12 crc kubenswrapper[4804]: I0128 11:38:12.583415 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8" gracePeriod=600 Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294202 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8" exitCode=0 Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294329 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8"} Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294588 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad"} Jan 28 11:38:13 crc kubenswrapper[4804]: I0128 11:38:13.294619 4804 scope.go:117] "RemoveContainer" containerID="493f3a58ce9c84e61c12f35c4be8ff28af2862f186b7fdf44e3a4a848a20107b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.351238 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d"] Jan 28 11:38:34 crc 
kubenswrapper[4804]: I0128 11:38:34.352603 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.355533 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-wzwqx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.356085 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.357020 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.359910 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-tg24f" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.378462 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.379816 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.387471 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-5kbq5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.396639 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.405151 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.416347 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.419939 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.426187 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-j2vj6" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.436057 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mchf8\" (UniqueName: \"kubernetes.io/projected/b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048-kube-api-access-mchf8\") pod \"designate-operator-controller-manager-6d9697b7f4-fbggh\" (UID: \"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.436146 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq5xn\" (UniqueName: \"kubernetes.io/projected/c36b33fc-3ff6-4c44-a079-bc48a5a3d509-kube-api-access-mq5xn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-vjb6d\" (UID: \"c36b33fc-3ff6-4c44-a079-bc48a5a3d509\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.436304 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjkt7\" (UniqueName: \"kubernetes.io/projected/db8796b2-e360-4287-9ba2-4ceda6de770e-kube-api-access-tjkt7\") pod \"cinder-operator-controller-manager-8d874c8fc-j5j86\" (UID: \"db8796b2-e360-4287-9ba2-4ceda6de770e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.454129 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 
11:38:34.468030 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.468969 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.473544 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-d8n9n" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.481925 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.505730 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.518172 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.521346 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.524659 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-nhrg8" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.529689 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538336 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjkt7\" (UniqueName: \"kubernetes.io/projected/db8796b2-e360-4287-9ba2-4ceda6de770e-kube-api-access-tjkt7\") pod \"cinder-operator-controller-manager-8d874c8fc-j5j86\" (UID: \"db8796b2-e360-4287-9ba2-4ceda6de770e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mchf8\" (UniqueName: \"kubernetes.io/projected/b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048-kube-api-access-mchf8\") pod \"designate-operator-controller-manager-6d9697b7f4-fbggh\" (UID: \"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538468 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4vvw\" (UniqueName: \"kubernetes.io/projected/acdcc5e8-c284-444e-86c2-96aec766b35b-kube-api-access-l4vvw\") pod \"heat-operator-controller-manager-69d6db494d-hxv8b\" (UID: \"acdcc5e8-c284-444e-86c2-96aec766b35b\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538491 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6xpm\" (UniqueName: \"kubernetes.io/projected/186e63a0-88e6-404b-963c-e5cb22485277-kube-api-access-f6xpm\") pod \"glance-operator-controller-manager-8886f4c47-qz2dl\" (UID: \"186e63a0-88e6-404b-963c-e5cb22485277\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.538515 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mq5xn\" (UniqueName: \"kubernetes.io/projected/c36b33fc-3ff6-4c44-a079-bc48a5a3d509-kube-api-access-mq5xn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-vjb6d\" (UID: \"c36b33fc-3ff6-4c44-a079-bc48a5a3d509\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.540589 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.541537 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.551771 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.552234 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7v747" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.559462 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.562018 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.563297 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.569479 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-mdsbd" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.583010 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjkt7\" (UniqueName: \"kubernetes.io/projected/db8796b2-e360-4287-9ba2-4ceda6de770e-kube-api-access-tjkt7\") pod \"cinder-operator-controller-manager-8d874c8fc-j5j86\" (UID: \"db8796b2-e360-4287-9ba2-4ceda6de770e\") " pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.583010 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mchf8\" (UniqueName: 
\"kubernetes.io/projected/b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048-kube-api-access-mchf8\") pod \"designate-operator-controller-manager-6d9697b7f4-fbggh\" (UID: \"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048\") " pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.583034 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.584489 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.587608 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mq5xn\" (UniqueName: \"kubernetes.io/projected/c36b33fc-3ff6-4c44-a079-bc48a5a3d509-kube-api-access-mq5xn\") pod \"barbican-operator-controller-manager-7b6c4d8c5f-vjb6d\" (UID: \"c36b33fc-3ff6-4c44-a079-bc48a5a3d509\") " pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.589160 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-rn76n" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.616222 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.652143 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655090 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rzjn\" (UniqueName: 
\"kubernetes.io/projected/f75f08ff-7d3c-4fb4-a366-1c996771a71d-kube-api-access-2rzjn\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5mzv\" (UniqueName: \"kubernetes.io/projected/e770ba97-59e1-4752-8e93-bc7d53ff7c04-kube-api-access-d5mzv\") pod \"ironic-operator-controller-manager-5f4b8bd54d-k6rzx\" (UID: \"e770ba97-59e1-4752-8e93-bc7d53ff7c04\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655174 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-94hwg\" (UniqueName: \"kubernetes.io/projected/ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d-kube-api-access-94hwg\") pod \"horizon-operator-controller-manager-5fb775575f-fw9dq\" (UID: \"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655207 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4vvw\" (UniqueName: \"kubernetes.io/projected/acdcc5e8-c284-444e-86c2-96aec766b35b-kube-api-access-l4vvw\") pod \"heat-operator-controller-manager-69d6db494d-hxv8b\" (UID: \"acdcc5e8-c284-444e-86c2-96aec766b35b\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655232 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6xpm\" (UniqueName: \"kubernetes.io/projected/186e63a0-88e6-404b-963c-e5cb22485277-kube-api-access-f6xpm\") pod 
\"glance-operator-controller-manager-8886f4c47-qz2dl\" (UID: \"186e63a0-88e6-404b-963c-e5cb22485277\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655256 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.655314 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pkd6m\" (UniqueName: \"kubernetes.io/projected/d5ce0c1e-3061-46ed-a816-3839144b160a-kube-api-access-pkd6m\") pod \"keystone-operator-controller-manager-84f48565d4-s92b7\" (UID: \"d5ce0c1e-3061-46ed-a816-3839144b160a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.663305 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.667069 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.671439 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-rcf4h" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.694613 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.727757 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.730332 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.733530 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.738852 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.740426 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.747347 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6xpm\" (UniqueName: \"kubernetes.io/projected/186e63a0-88e6-404b-963c-e5cb22485277-kube-api-access-f6xpm\") pod \"glance-operator-controller-manager-8886f4c47-qz2dl\" (UID: \"186e63a0-88e6-404b-963c-e5cb22485277\") " pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.748192 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-nvv4g" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.749396 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4vvw\" (UniqueName: \"kubernetes.io/projected/acdcc5e8-c284-444e-86c2-96aec766b35b-kube-api-access-l4vvw\") pod \"heat-operator-controller-manager-69d6db494d-hxv8b\" (UID: \"acdcc5e8-c284-444e-86c2-96aec766b35b\") " pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.755381 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756245 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rzjn\" (UniqueName: \"kubernetes.io/projected/f75f08ff-7d3c-4fb4-a366-1c996771a71d-kube-api-access-2rzjn\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756273 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d5mzv\" (UniqueName: \"kubernetes.io/projected/e770ba97-59e1-4752-8e93-bc7d53ff7c04-kube-api-access-d5mzv\") pod \"ironic-operator-controller-manager-5f4b8bd54d-k6rzx\" (UID: \"e770ba97-59e1-4752-8e93-bc7d53ff7c04\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756301 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-94hwg\" (UniqueName: \"kubernetes.io/projected/ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d-kube-api-access-94hwg\") pod \"horizon-operator-controller-manager-5fb775575f-fw9dq\" (UID: \"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756327 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756368 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jwst\" (UniqueName: \"kubernetes.io/projected/ec1046a1-b834-40e4-b82a-923885428171-kube-api-access-7jwst\") pod \"manila-operator-controller-manager-7dd968899f-wl5w5\" (UID: \"ec1046a1-b834-40e4-b82a-923885428171\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.756401 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pkd6m\" (UniqueName: \"kubernetes.io/projected/d5ce0c1e-3061-46ed-a816-3839144b160a-kube-api-access-pkd6m\") pod \"keystone-operator-controller-manager-84f48565d4-s92b7\" (UID: \"d5ce0c1e-3061-46ed-a816-3839144b160a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: E0128 11:38:34.757906 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:34 crc kubenswrapper[4804]: E0128 11:38:34.757975 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:35.257959324 +0000 UTC m=+991.052839308 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.758532 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.776926 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rzjn\" (UniqueName: \"kubernetes.io/projected/f75f08ff-7d3c-4fb4-a366-1c996771a71d-kube-api-access-2rzjn\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.780820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d5mzv\" (UniqueName: \"kubernetes.io/projected/e770ba97-59e1-4752-8e93-bc7d53ff7c04-kube-api-access-d5mzv\") pod \"ironic-operator-controller-manager-5f4b8bd54d-k6rzx\" (UID: \"e770ba97-59e1-4752-8e93-bc7d53ff7c04\") " pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.786528 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pkd6m\" (UniqueName: \"kubernetes.io/projected/d5ce0c1e-3061-46ed-a816-3839144b160a-kube-api-access-pkd6m\") pod \"keystone-operator-controller-manager-84f48565d4-s92b7\" (UID: \"d5ce0c1e-3061-46ed-a816-3839144b160a\") " pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.789409 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.790416 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.793001 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-wv4st" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.794954 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-94hwg\" (UniqueName: \"kubernetes.io/projected/ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d-kube-api-access-94hwg\") pod \"horizon-operator-controller-manager-5fb775575f-fw9dq\" (UID: \"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d\") " pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.797226 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.806443 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.807668 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.809748 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-qmnkx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.824814 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.840168 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.850006 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.851307 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.852697 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.854570 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-z4l97" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.856929 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857514 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jwst\" (UniqueName: \"kubernetes.io/projected/ec1046a1-b834-40e4-b82a-923885428171-kube-api-access-7jwst\") pod \"manila-operator-controller-manager-7dd968899f-wl5w5\" (UID: \"ec1046a1-b834-40e4-b82a-923885428171\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857565 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7xrx\" (UniqueName: \"kubernetes.io/projected/07990c6c-3350-45a8-85de-1e0db97acb07-kube-api-access-g7xrx\") pod \"mariadb-operator-controller-manager-67bf948998-7dg9l\" (UID: \"07990c6c-3350-45a8-85de-1e0db97acb07\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857621 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h9rff\" (UniqueName: \"kubernetes.io/projected/b79b961c-583d-4e78-8513-c44ed292c325-kube-api-access-h9rff\") pod \"neutron-operator-controller-manager-585dbc889-n9kpn\" (UID: \"b79b961c-583d-4e78-8513-c44ed292c325\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857653 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sv2mr\" (UniqueName: \"kubernetes.io/projected/8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1-kube-api-access-sv2mr\") pod \"nova-operator-controller-manager-55bff696bd-dndv5\" (UID: \"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.857807 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.898118 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-h57zg" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.906848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jwst\" (UniqueName: \"kubernetes.io/projected/ec1046a1-b834-40e4-b82a-923885428171-kube-api-access-7jwst\") pod \"manila-operator-controller-manager-7dd968899f-wl5w5\" (UID: \"ec1046a1-b834-40e4-b82a-923885428171\") " pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.912344 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.944479 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.947207 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.953192 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.958530 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m8hd9" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960077 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h9rff\" (UniqueName: \"kubernetes.io/projected/b79b961c-583d-4e78-8513-c44ed292c325-kube-api-access-h9rff\") pod \"neutron-operator-controller-manager-585dbc889-n9kpn\" (UID: \"b79b961c-583d-4e78-8513-c44ed292c325\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sv2mr\" (UniqueName: \"kubernetes.io/projected/8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1-kube-api-access-sv2mr\") pod \"nova-operator-controller-manager-55bff696bd-dndv5\" (UID: \"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960167 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-229rn\" (UniqueName: \"kubernetes.io/projected/7ab2436a-1b54-4c5e-bdc1-959026660c98-kube-api-access-229rn\") pod \"ovn-operator-controller-manager-788c46999f-4cpk5\" (UID: \"7ab2436a-1b54-4c5e-bdc1-959026660c98\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960201 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7wvn\" (UniqueName: \"kubernetes.io/projected/8f1a2428-c6c8-4113-9654-0c58ab91b45b-kube-api-access-f7wvn\") pod \"octavia-operator-controller-manager-6687f8d877-m5xng\" (UID: \"8f1a2428-c6c8-4113-9654-0c58ab91b45b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.960314 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g7xrx\" (UniqueName: \"kubernetes.io/projected/07990c6c-3350-45a8-85de-1e0db97acb07-kube-api-access-g7xrx\") pod \"mariadb-operator-controller-manager-67bf948998-7dg9l\" (UID: \"07990c6c-3350-45a8-85de-1e0db97acb07\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.964508 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.976812 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.977599 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.980631 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"] Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.980740 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.982970 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g7xrx\" (UniqueName: \"kubernetes.io/projected/07990c6c-3350-45a8-85de-1e0db97acb07-kube-api-access-g7xrx\") pod \"mariadb-operator-controller-manager-67bf948998-7dg9l\" (UID: \"07990c6c-3350-45a8-85de-1e0db97acb07\") " pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.997660 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-7vnf2" Jan 28 11:38:34 crc kubenswrapper[4804]: I0128 11:38:34.998835 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h9rff\" (UniqueName: \"kubernetes.io/projected/b79b961c-583d-4e78-8513-c44ed292c325-kube-api-access-h9rff\") pod \"neutron-operator-controller-manager-585dbc889-n9kpn\" (UID: \"b79b961c-583d-4e78-8513-c44ed292c325\") " pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.020054 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.043620 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sv2mr\" (UniqueName: \"kubernetes.io/projected/8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1-kube-api-access-sv2mr\") pod \"nova-operator-controller-manager-55bff696bd-dndv5\" (UID: \"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1\") " pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.048201 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.057387 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.065900 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4hr5\" (UniqueName: \"kubernetes.io/projected/a26075bd-4d23-463a-abe8-575a02ebc9ad-kube-api-access-n4hr5\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.067836 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.067960 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w9dpb\" (UniqueName: \"kubernetes.io/projected/deece2f8-8c1c-4599-80f4-44e6ec055a18-kube-api-access-w9dpb\") pod \"placement-operator-controller-manager-5b964cf4cd-bfl45\" (UID: \"deece2f8-8c1c-4599-80f4-44e6ec055a18\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.068039 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-229rn\" (UniqueName: 
\"kubernetes.io/projected/7ab2436a-1b54-4c5e-bdc1-959026660c98-kube-api-access-229rn\") pod \"ovn-operator-controller-manager-788c46999f-4cpk5\" (UID: \"7ab2436a-1b54-4c5e-bdc1-959026660c98\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.068086 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7wvn\" (UniqueName: \"kubernetes.io/projected/8f1a2428-c6c8-4113-9654-0c58ab91b45b-kube-api-access-f7wvn\") pod \"octavia-operator-controller-manager-6687f8d877-m5xng\" (UID: \"8f1a2428-c6c8-4113-9654-0c58ab91b45b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.074690 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-c9zbm" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.093907 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.119273 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.137704 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7wvn\" (UniqueName: \"kubernetes.io/projected/8f1a2428-c6c8-4113-9654-0c58ab91b45b-kube-api-access-f7wvn\") pod \"octavia-operator-controller-manager-6687f8d877-m5xng\" (UID: \"8f1a2428-c6c8-4113-9654-0c58ab91b45b\") " pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.138139 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.147170 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.151600 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-229rn\" (UniqueName: \"kubernetes.io/projected/7ab2436a-1b54-4c5e-bdc1-959026660c98-kube-api-access-229rn\") pod \"ovn-operator-controller-manager-788c46999f-4cpk5\" (UID: \"7ab2436a-1b54-4c5e-bdc1-959026660c98\") " pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.163168 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.164478 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.166784 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.170272 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-xrqjg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171195 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4hr5\" (UniqueName: \"kubernetes.io/projected/a26075bd-4d23-463a-abe8-575a02ebc9ad-kube-api-access-n4hr5\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171225 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171253 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w9dpb\" (UniqueName: \"kubernetes.io/projected/deece2f8-8c1c-4599-80f4-44e6ec055a18-kube-api-access-w9dpb\") pod \"placement-operator-controller-manager-5b964cf4cd-bfl45\" (UID: \"deece2f8-8c1c-4599-80f4-44e6ec055a18\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.171316 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9dcn\" (UniqueName: \"kubernetes.io/projected/eb1c01a9-6548-49cd-8e1f-4f01daaff754-kube-api-access-n9dcn\") pod 
\"swift-operator-controller-manager-68fc8c869-fwd68\" (UID: \"eb1c01a9-6548-49cd-8e1f-4f01daaff754\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.171633 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.171676 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:35.671658962 +0000 UTC m=+991.466538946 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.193802 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.221439 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.223936 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4hr5\" (UniqueName: \"kubernetes.io/projected/a26075bd-4d23-463a-abe8-575a02ebc9ad-kube-api-access-n4hr5\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.241315 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w9dpb\" (UniqueName: \"kubernetes.io/projected/deece2f8-8c1c-4599-80f4-44e6ec055a18-kube-api-access-w9dpb\") pod \"placement-operator-controller-manager-5b964cf4cd-bfl45\" (UID: \"deece2f8-8c1c-4599-80f4-44e6ec055a18\") " pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.280954 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.281929 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2rnj\" (UniqueName: \"kubernetes.io/projected/23a10136-5079-4838-adf9-6512ccfd5f2c-kube-api-access-m2rnj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2hdgj\" (UID: \"23a10136-5079-4838-adf9-6512ccfd5f2c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.281976 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9dcn\" (UniqueName: \"kubernetes.io/projected/eb1c01a9-6548-49cd-8e1f-4f01daaff754-kube-api-access-n9dcn\") pod \"swift-operator-controller-manager-68fc8c869-fwd68\" (UID: \"eb1c01a9-6548-49cd-8e1f-4f01daaff754\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.282006 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.282112 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.282155 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. 
No retries permitted until 2026-01-28 11:38:36.282140079 +0000 UTC m=+992.077020063 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.289242 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.327961 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.332785 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.357561 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.363785 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-wzzgl" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.389125 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2k6d\" (UniqueName: \"kubernetes.io/projected/ff35634f-2b61-44e4-934a-74b39c5b7335-kube-api-access-z2k6d\") pod \"test-operator-controller-manager-56f8bfcd9f-9vgvb\" (UID: \"ff35634f-2b61-44e4-934a-74b39c5b7335\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.389279 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m2rnj\" (UniqueName: \"kubernetes.io/projected/23a10136-5079-4838-adf9-6512ccfd5f2c-kube-api-access-m2rnj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2hdgj\" (UID: \"23a10136-5079-4838-adf9-6512ccfd5f2c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.402115 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.425063 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9dcn\" (UniqueName: \"kubernetes.io/projected/eb1c01a9-6548-49cd-8e1f-4f01daaff754-kube-api-access-n9dcn\") pod \"swift-operator-controller-manager-68fc8c869-fwd68\" (UID: \"eb1c01a9-6548-49cd-8e1f-4f01daaff754\") " pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.428098 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m2rnj\" (UniqueName: \"kubernetes.io/projected/23a10136-5079-4838-adf9-6512ccfd5f2c-kube-api-access-m2rnj\") pod \"telemetry-operator-controller-manager-64b5b76f97-2hdgj\" (UID: \"23a10136-5079-4838-adf9-6512ccfd5f2c\") " pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.445899 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-659wf"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.448158 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.450228 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-pxw8f" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.451362 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-659wf"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.472214 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.499643 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2k6d\" (UniqueName: \"kubernetes.io/projected/ff35634f-2b61-44e4-934a-74b39c5b7335-kube-api-access-z2k6d\") pod \"test-operator-controller-manager-56f8bfcd9f-9vgvb\" (UID: \"ff35634f-2b61-44e4-934a-74b39c5b7335\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.499776 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x74dd\" (UniqueName: \"kubernetes.io/projected/67fbb1e9-d718-4075-971a-33a245c498e3-kube-api-access-x74dd\") pod \"watcher-operator-controller-manager-564965969-659wf\" (UID: \"67fbb1e9-d718-4075-971a-33a245c498e3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.514939 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.517084 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.521851 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"metrics-server-cert" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.522772 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.523293 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lnb4p" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.532359 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.547819 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2k6d\" (UniqueName: \"kubernetes.io/projected/ff35634f-2b61-44e4-934a-74b39c5b7335-kube-api-access-z2k6d\") pod \"test-operator-controller-manager-56f8bfcd9f-9vgvb\" (UID: \"ff35634f-2b61-44e4-934a-74b39c5b7335\") " pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.547906 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.548934 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.558488 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-784h5" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.568518 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601485 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601539 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xskb9\" (UniqueName: \"kubernetes.io/projected/69938639-9ff0-433c-bd73-8d129935e7d4-kube-api-access-xskb9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cqlch\" (UID: \"69938639-9ff0-433c-bd73-8d129935e7d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601586 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x74dd\" (UniqueName: \"kubernetes.io/projected/67fbb1e9-d718-4075-971a-33a245c498e3-kube-api-access-x74dd\") pod \"watcher-operator-controller-manager-564965969-659wf\" (UID: \"67fbb1e9-d718-4075-971a-33a245c498e3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601618 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.601695 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48nzc\" (UniqueName: \"kubernetes.io/projected/58f748c2-ceb6-4d34-8a2e-8227e59ef560-kube-api-access-48nzc\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.623084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x74dd\" (UniqueName: \"kubernetes.io/projected/67fbb1e9-d718-4075-971a-33a245c498e3-kube-api-access-x74dd\") pod \"watcher-operator-controller-manager-564965969-659wf\" (UID: \"67fbb1e9-d718-4075-971a-33a245c498e3\") " pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.627595 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704328 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704427 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48nzc\" (UniqueName: \"kubernetes.io/projected/58f748c2-ceb6-4d34-8a2e-8227e59ef560-kube-api-access-48nzc\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704558 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704593 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.704651 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xskb9\" (UniqueName: \"kubernetes.io/projected/69938639-9ff0-433c-bd73-8d129935e7d4-kube-api-access-xskb9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cqlch\" (UID: \"69938639-9ff0-433c-bd73-8d129935e7d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705196 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705246 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:36.705231927 +0000 UTC m=+992.500111911 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705526 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.705556 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:36.205544447 +0000 UTC m=+992.000424431 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.706261 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: E0128 11:38:35.706347 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:36.206324891 +0000 UTC m=+992.001204875 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.736944 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xskb9\" (UniqueName: \"kubernetes.io/projected/69938639-9ff0-433c-bd73-8d129935e7d4-kube-api-access-xskb9\") pod \"rabbitmq-cluster-operator-manager-668c99d594-cqlch\" (UID: \"69938639-9ff0-433c-bd73-8d129935e7d4\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.739264 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48nzc\" (UniqueName: \"kubernetes.io/projected/58f748c2-ceb6-4d34-8a2e-8227e59ef560-kube-api-access-48nzc\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " 
pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.768002 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.772057 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.787393 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.862711 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.898680 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.946953 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.954921 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq"] Jan 28 11:38:35 crc kubenswrapper[4804]: I0128 11:38:35.988817 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.049437 4804 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod186e63a0_88e6_404b_963c_e5cb22485277.slice/crio-9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f WatchSource:0}: Error finding container 9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f: Status 404 returned error can't find the container with id 9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.220898 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.221304 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221442 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221489 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:37.221474889 +0000 UTC m=+993.016354873 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221529 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.221547 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:37.221541641 +0000 UTC m=+993.016421625 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.322226 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.322416 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.322493 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert 
podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:38.322474004 +0000 UTC m=+994.117353988 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.357190 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.361723 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.386716 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.425010 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.432049 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8c7ff5ff_8c23_46f4_9ba6_dda63fa9cce1.slice/crio-261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011 WatchSource:0}: Error finding container 261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011: Status 404 returned error can't find the container with id 261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011 Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.433338 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5"] Jan 28 
11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.434460 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f1a2428_c6c8_4113_9654_0c58ab91b45b.slice/crio-93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300 WatchSource:0}: Error finding container 93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300: Status 404 returned error can't find the container with id 93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300 Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.442553 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.447988 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.504496 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" event={"ID":"acdcc5e8-c284-444e-86c2-96aec766b35b","Type":"ContainerStarted","Data":"4eafef24c53cb67d10543ff1aed77ed6e30fb1e8ae75e6602e351f4588afefaf"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.508629 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" event={"ID":"ec1046a1-b834-40e4-b82a-923885428171","Type":"ContainerStarted","Data":"b53af7c7855cf739fadf4e6e2c6df12f485fca7792fb09d3178884d186293256"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.510038 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" event={"ID":"07990c6c-3350-45a8-85de-1e0db97acb07","Type":"ContainerStarted","Data":"b8663b9dbf27c61ab81a3d421ac11cf9e2478b68ba8cfe7af7369e8526e5d63a"} Jan 28 11:38:36 crc 
kubenswrapper[4804]: I0128 11:38:36.511260 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" event={"ID":"8f1a2428-c6c8-4113-9654-0c58ab91b45b","Type":"ContainerStarted","Data":"93091301d88d6e08bfd7e616a5576061ebf5786169b510dabb1eabd6baf63300"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.512265 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" event={"ID":"db8796b2-e360-4287-9ba2-4ceda6de770e","Type":"ContainerStarted","Data":"895a3f5511952bbf7089d97085752bf56272c5bf27b267fd5d34d12d5f3df970"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.513711 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" event={"ID":"e770ba97-59e1-4752-8e93-bc7d53ff7c04","Type":"ContainerStarted","Data":"48982f5d0ff8a2b65899f3157144028f8be0420b71da1d2c1c5066be864990c4"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.514904 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" event={"ID":"186e63a0-88e6-404b-963c-e5cb22485277","Type":"ContainerStarted","Data":"9f377a0a09fb2c27c43540ac51e0bc4a180b1a7f90d7106f1a51eccecc44055f"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.515848 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" event={"ID":"c36b33fc-3ff6-4c44-a079-bc48a5a3d509","Type":"ContainerStarted","Data":"83f3ec579dfec43c7d39c2d9940410471f47310656fde55a514e880646f9de7a"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.516722 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" 
event={"ID":"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d","Type":"ContainerStarted","Data":"8b937b2020729521500c8535098dfb65146900ac89c091e16ad7f17032b2e0ab"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.517840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" event={"ID":"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048","Type":"ContainerStarted","Data":"3483515f63ecf7c100b081e893ac4dcd41aaad186e12fb0d37fa0b574fa783f7"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.519166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" event={"ID":"d5ce0c1e-3061-46ed-a816-3839144b160a","Type":"ContainerStarted","Data":"8bb892969bb182a9eaf3e5a225dd66b12d6d7f19b92fc93377f5eaf54ba5460e"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.519988 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" event={"ID":"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1","Type":"ContainerStarted","Data":"261bc620856907d9e6b5aa5a74d02d6679e8ce3788780205eec4e6669d509011"} Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.729780 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.730104 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.730217 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:38.730186122 +0000 UTC m=+994.525066106 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.790795 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.809918 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.815992 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.823522 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podff35634f_2b61_44e4_934a_74b39c5b7335.slice/crio-99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc WatchSource:0}: Error finding container 99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc: Status 404 returned error can't find the container with id 99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.827475 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.834347 4804 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openstack-operators/watcher-operator-controller-manager-564965969-659wf"] Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.839337 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5"] Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.839347 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-z2k6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-56f8bfcd9f-9vgvb_openstack-operators(ff35634f-2b61-44e4-934a-74b39c5b7335): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.841077 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podUID="ff35634f-2b61-44e4-934a-74b39c5b7335" Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.844942 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.847274 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod67fbb1e9_d718_4075_971a_33a245c498e3.slice/crio-d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1 WatchSource:0}: Error finding container d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1: Status 404 returned error can't find the container with id 
d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1 Jan 28 11:38:36 crc kubenswrapper[4804]: I0128 11:38:36.850062 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn"] Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.850659 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69938639_9ff0_433c_bd73_8d129935e7d4.slice/crio-8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd WatchSource:0}: Error finding container 8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd: Status 404 returned error can't find the container with id 8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.852468 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x74dd,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod watcher-operator-controller-manager-564965969-659wf_openstack-operators(67fbb1e9-d718-4075-971a-33a245c498e3): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.854598 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podUID="67fbb1e9-d718-4075-971a-33a245c498e3" Jan 28 11:38:36 crc 
kubenswrapper[4804]: E0128 11:38:36.854722 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-xskb9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
rabbitmq-cluster-operator-manager-668c99d594-cqlch_openstack-operators(69938639-9ff0-433c-bd73-8d129935e7d4): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.856358 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podUID="69938639-9ff0-433c-bd73-8d129935e7d4" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.859551 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-n9dcn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod swift-operator-controller-manager-68fc8c869-fwd68_openstack-operators(eb1c01a9-6548-49cd-8e1f-4f01daaff754): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.859891 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7ab2436a_1b54_4c5e_bdc1_959026660c98.slice/crio-380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1 WatchSource:0}: Error finding container 380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1: Status 404 returned error can't find the container with id 380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1 Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.860900 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" 
with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podUID="eb1c01a9-6548-49cd-8e1f-4f01daaff754" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.862786 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-229rn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod ovn-operator-controller-manager-788c46999f-4cpk5_openstack-operators(7ab2436a-1b54-4c5e-bdc1-959026660c98): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.863942 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podUID="7ab2436a-1b54-4c5e-bdc1-959026660c98" Jan 28 11:38:36 crc kubenswrapper[4804]: W0128 11:38:36.865552 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb79b961c_583d_4e78_8513_c44ed292c325.slice/crio-7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b WatchSource:0}: Error finding container 7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b: Status 404 returned error can't find the container with id 7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.869999 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-h9rff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod neutron-operator-controller-manager-585dbc889-n9kpn_openstack-operators(b79b961c-583d-4e78-8513-c44ed292c325): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Jan 28 11:38:36 crc kubenswrapper[4804]: E0128 11:38:36.871311 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podUID="b79b961c-583d-4e78-8513-c44ed292c325" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.240897 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241005 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 
11:38:37.241055 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241073 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:39.241056133 +0000 UTC m=+995.035936117 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241275 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.241348 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:39.241325782 +0000 UTC m=+995.036205846 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.529268 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" event={"ID":"67fbb1e9-d718-4075-971a-33a245c498e3","Type":"ContainerStarted","Data":"d31e572e58c22e888e1498dd489255278c3edaaf22990b079ab700fe74359cb1"} Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.533466 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" event={"ID":"7ab2436a-1b54-4c5e-bdc1-959026660c98","Type":"ContainerStarted","Data":"380b9407abe431c50c621dbca9001fd1bd11837927a8e18809e817990bdcc8d1"} Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.536139 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" event={"ID":"deece2f8-8c1c-4599-80f4-44e6ec055a18","Type":"ContainerStarted","Data":"c7b16bd5b2eb9279a9e0a4bb6602854b6130872d7e8fc5584f44758f9d427b54"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.537272 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podUID="7ab2436a-1b54-4c5e-bdc1-959026660c98" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.537550 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" event={"ID":"23a10136-5079-4838-adf9-6512ccfd5f2c","Type":"ContainerStarted","Data":"5ad8a54ec0f8f56f389eb1896ff7edb2cdef873286746d7837759852648f582c"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.539128 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podUID="67fbb1e9-d718-4075-971a-33a245c498e3" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.543280 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" event={"ID":"ff35634f-2b61-44e4-934a-74b39c5b7335","Type":"ContainerStarted","Data":"99493dacdb37721eecccf5cfc1bd1bd74e8e4cfcee376e0c05b61cb7913672dc"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.545246 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podUID="ff35634f-2b61-44e4-934a-74b39c5b7335" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.551697 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" event={"ID":"b79b961c-583d-4e78-8513-c44ed292c325","Type":"ContainerStarted","Data":"7a48f07032ac38185455f7a3181866ec5285d8c1b7e98e27a7426d548368590b"} Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.552707 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" event={"ID":"69938639-9ff0-433c-bd73-8d129935e7d4","Type":"ContainerStarted","Data":"8457e202d081129357c0a4ea6a3036fb2674cfc6085e2095e13253a6b11561fd"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.554531 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podUID="b79b961c-583d-4e78-8513-c44ed292c325" Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.555104 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podUID="69938639-9ff0-433c-bd73-8d129935e7d4" Jan 28 11:38:37 crc kubenswrapper[4804]: I0128 11:38:37.555480 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" event={"ID":"eb1c01a9-6548-49cd-8e1f-4f01daaff754","Type":"ContainerStarted","Data":"6cef4fd47e6f9491fe048baec30457f92241ea7f58b078ee3ec97beea794a7cd"} Jan 28 11:38:37 crc kubenswrapper[4804]: E0128 11:38:37.557756 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podUID="eb1c01a9-6548-49cd-8e1f-4f01daaff754" 
Jan 28 11:38:38 crc kubenswrapper[4804]: I0128 11:38:38.364494 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.364657 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.364729 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:42.36469294 +0000 UTC m=+998.159572924 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566426 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/watcher-operator@sha256:7869203f6f97de780368d507636031090fed3b658d2f7771acbd4481bdfc870b\\\"\"" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podUID="67fbb1e9-d718-4075-971a-33a245c498e3" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566548 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/swift-operator@sha256:42ad717de1b82267d244b016e5491a5b66a5c3deb6b8c2906a379e1296a2c382\\\"\"" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podUID="eb1c01a9-6548-49cd-8e1f-4f01daaff754" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566660 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podUID="69938639-9ff0-433c-bd73-8d129935e7d4" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.566699 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/neutron-operator@sha256:bbb46b8b3b69fdfad7bafc10a7e88f6ea58bcdc3c91e30beb79e24417d52e0f6\\\"\"" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podUID="b79b961c-583d-4e78-8513-c44ed292c325" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.571219 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:ea7b72b648a5bde2eebd804c2a5c1608d448a4892176c1b8d000c1eef4bb92b4\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podUID="7ab2436a-1b54-4c5e-bdc1-959026660c98" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.571644 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/test-operator@sha256:3e01e99d3ca1b6c20b1bb015b00cfcbffc584f22a93dc6fe4019d63b813c0241\\\"\"" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podUID="ff35634f-2b61-44e4-934a-74b39c5b7335" Jan 28 11:38:38 crc kubenswrapper[4804]: I0128 11:38:38.777759 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.777972 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:38 crc kubenswrapper[4804]: E0128 11:38:38.778061 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:42.778038227 +0000 UTC m=+998.572918241 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: I0128 11:38:39.285747 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:39 crc kubenswrapper[4804]: I0128 11:38:39.285930 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286000 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286118 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:43.2860916 +0000 UTC m=+999.080971584 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286174 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:39 crc kubenswrapper[4804]: E0128 11:38:39.286268 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:43.286245415 +0000 UTC m=+999.081125389 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: I0128 11:38:42.440795 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.441016 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.441544 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert 
podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:38:50.44151774 +0000 UTC m=+1006.236397724 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: I0128 11:38:42.848107 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.848382 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:42 crc kubenswrapper[4804]: E0128 11:38:42.848524 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:38:50.848490484 +0000 UTC m=+1006.643370678 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: I0128 11:38:43.356685 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:43 crc kubenswrapper[4804]: I0128 11:38:43.356796 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357001 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357042 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357088 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:51.357062713 +0000 UTC m=+1007.151942707 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:43 crc kubenswrapper[4804]: E0128 11:38:43.357181 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:38:51.357162116 +0000 UTC m=+1007.152042090 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:49 crc kubenswrapper[4804]: E0128 11:38:49.827154 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e" Jan 28 11:38:49 crc kubenswrapper[4804]: E0128 11:38:49.827906 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e,Command:[/manager],Args:[--leader-elect --health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},EnvVar{Name:METRICS_CERTS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-sv2mr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-operator-controller-manager-55bff696bd-dndv5_openstack-operators(8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:38:49 crc kubenswrapper[4804]: E0128 11:38:49.829087 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" podUID="8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.485180 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.485384 4804 secret.go:188] Couldn't get secret openstack-operators/infra-operator-webhook-server-cert: secret 
"infra-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.485624 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert podName:f75f08ff-7d3c-4fb4-a366-1c996771a71d nodeName:}" failed. No retries permitted until 2026-01-28 11:39:06.485607591 +0000 UTC m=+1022.280487575 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert") pod "infra-operator-controller-manager-79955696d6-wb5k2" (UID: "f75f08ff-7d3c-4fb4-a366-1c996771a71d") : secret "infra-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.665805 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" event={"ID":"db8796b2-e360-4287-9ba2-4ceda6de770e","Type":"ContainerStarted","Data":"e29209b99c46fdb9842e3b0f93efabe2f0301e4d7d3564b77a8dfaca95b6bd32"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.675987 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" event={"ID":"c36b33fc-3ff6-4c44-a079-bc48a5a3d509","Type":"ContainerStarted","Data":"ddc53a5f04a33046ae63d900b98b8c4a6bcde97c259d081d7ca78426d74e7f2a"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.676126 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.680047 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" event={"ID":"23a10136-5079-4838-adf9-6512ccfd5f2c","Type":"ContainerStarted","Data":"7a8189a2d971a4efb202aa5b8be634bba0414514d6edc739073a662c8e2cfad9"} Jan 28 11:38:50 crc 
kubenswrapper[4804]: I0128 11:38:50.680907 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.686276 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" event={"ID":"ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d","Type":"ContainerStarted","Data":"12072f1190dc94a8a9ed29811d2778d960fdc92e7fc25c10465662c4806c1e0b"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.686500 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.696205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" event={"ID":"b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048","Type":"ContainerStarted","Data":"71ef32a95976cf58c1d83a22901913052564a7f598676e3e05aa2735a7d3b782"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.696305 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.698595 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" podStartSLOduration=2.830931078 podStartE2EDuration="16.69856843s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.020422949 +0000 UTC m=+991.815302933" lastFinishedPulling="2026-01-28 11:38:49.888060311 +0000 UTC m=+1005.682940285" observedRunningTime="2026-01-28 11:38:50.69162592 +0000 UTC m=+1006.486505924" watchObservedRunningTime="2026-01-28 11:38:50.69856843 +0000 UTC m=+1006.493448414" Jan 28 11:38:50 crc 
kubenswrapper[4804]: I0128 11:38:50.700478 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" event={"ID":"8f1a2428-c6c8-4113-9654-0c58ab91b45b","Type":"ContainerStarted","Data":"2a25bd14770dea745e830d6e517cce22d83f4c6838cec4ba671daf87e7fc27c0"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.700724 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.710148 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" event={"ID":"d5ce0c1e-3061-46ed-a816-3839144b160a","Type":"ContainerStarted","Data":"7add3c1bab2d6c06b9fffd4f9d703efc50adef9e203e7bbd14f785f21739c1c8"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.710465 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.712691 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" event={"ID":"ec1046a1-b834-40e4-b82a-923885428171","Type":"ContainerStarted","Data":"9c283fa1a070758c836bc59fd9aea2ed6bec7718c11fec1ef7827496d8f3a1fe"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.713359 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.723759 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" 
event={"ID":"e770ba97-59e1-4752-8e93-bc7d53ff7c04","Type":"ContainerStarted","Data":"13586ab275c816bc340826486c2c61b3d2058cfb2fd0b170bc80c87f02684f8d"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.723815 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.729827 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" podStartSLOduration=2.825952971 podStartE2EDuration="16.729802934s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:35.921426119 +0000 UTC m=+991.716306103" lastFinishedPulling="2026-01-28 11:38:49.825276072 +0000 UTC m=+1005.620156066" observedRunningTime="2026-01-28 11:38:50.721278273 +0000 UTC m=+1006.516158257" watchObservedRunningTime="2026-01-28 11:38:50.729802934 +0000 UTC m=+1006.524682918" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.733530 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" event={"ID":"07990c6c-3350-45a8-85de-1e0db97acb07","Type":"ContainerStarted","Data":"aebe483f195968df63cb2423fa69db1cc0a49b2dc39619a3ae3262b64d8c7e2d"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.734259 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.758435 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" event={"ID":"186e63a0-88e6-404b-963c-e5cb22485277","Type":"ContainerStarted","Data":"470d98258dce0f4a9e032399bf76f152470ae22c9d51354c9f6ba54ec0d61a6d"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.759191 4804 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.765164 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" event={"ID":"deece2f8-8c1c-4599-80f4-44e6ec055a18","Type":"ContainerStarted","Data":"b30b7fe392e3b992f519ea4c849f88ee6b8911536434017e14be392b80c40558"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.765465 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.766181 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" podStartSLOduration=2.930925652 podStartE2EDuration="16.761865866s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.065398761 +0000 UTC m=+991.860278745" lastFinishedPulling="2026-01-28 11:38:49.896338975 +0000 UTC m=+1005.691218959" observedRunningTime="2026-01-28 11:38:50.759396846 +0000 UTC m=+1006.554276830" watchObservedRunningTime="2026-01-28 11:38:50.761865866 +0000 UTC m=+1006.556745850" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.768457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" event={"ID":"acdcc5e8-c284-444e-86c2-96aec766b35b","Type":"ContainerStarted","Data":"718f2fd22e8e5601797827fd652a9a8efaca90bdeb4ae8c14dc787e065f418b9"} Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.768487 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.775402 4804 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/nova-operator@sha256:5340b88039fac393da49ef4e181b2720c809c27a6bb30531a07a49342a1da45e\\\"\"" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" podUID="8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.783566 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" podStartSLOduration=3.716175167 podStartE2EDuration="16.783544035s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.819159334 +0000 UTC m=+992.614039318" lastFinishedPulling="2026-01-28 11:38:49.886528192 +0000 UTC m=+1005.681408186" observedRunningTime="2026-01-28 11:38:50.782334296 +0000 UTC m=+1006.577214290" watchObservedRunningTime="2026-01-28 11:38:50.783544035 +0000 UTC m=+1006.578424019" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.829356 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" podStartSLOduration=3.030721498 podStartE2EDuration="16.829339582s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.065310758 +0000 UTC m=+991.860190742" lastFinishedPulling="2026-01-28 11:38:49.863928842 +0000 UTC m=+1005.658808826" observedRunningTime="2026-01-28 11:38:50.821850254 +0000 UTC m=+1006.616730238" watchObservedRunningTime="2026-01-28 11:38:50.829339582 +0000 UTC m=+1006.624219566" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.868753 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" podStartSLOduration=3.3488241739999998 
podStartE2EDuration="16.868734786s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.366645721 +0000 UTC m=+992.161525705" lastFinishedPulling="2026-01-28 11:38:49.886556323 +0000 UTC m=+1005.681436317" observedRunningTime="2026-01-28 11:38:50.862974493 +0000 UTC m=+1006.657854477" watchObservedRunningTime="2026-01-28 11:38:50.868734786 +0000 UTC m=+1006.663614770" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.892529 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.893394 4804 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: E0128 11:38:50.893448 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert podName:a26075bd-4d23-463a-abe8-575a02ebc9ad nodeName:}" failed. No retries permitted until 2026-01-28 11:39:06.893433493 +0000 UTC m=+1022.688313477 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert") pod "openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" (UID: "a26075bd-4d23-463a-abe8-575a02ebc9ad") : secret "openstack-baremetal-operator-webhook-server-cert" not found Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.951491 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" podStartSLOduration=3.422757638 podStartE2EDuration="16.951478931s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.38736804 +0000 UTC m=+992.182248014" lastFinishedPulling="2026-01-28 11:38:49.916089313 +0000 UTC m=+1005.710969307" observedRunningTime="2026-01-28 11:38:50.950265652 +0000 UTC m=+1006.745145636" watchObservedRunningTime="2026-01-28 11:38:50.951478931 +0000 UTC m=+1006.746358915" Jan 28 11:38:50 crc kubenswrapper[4804]: I0128 11:38:50.994364 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" podStartSLOduration=3.094427917 podStartE2EDuration="16.994346136s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.072461346 +0000 UTC m=+991.867341320" lastFinishedPulling="2026-01-28 11:38:49.972379555 +0000 UTC m=+1005.767259539" observedRunningTime="2026-01-28 11:38:50.982389315 +0000 UTC m=+1006.777269299" watchObservedRunningTime="2026-01-28 11:38:50.994346136 +0000 UTC m=+1006.789226120" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.042815 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" podStartSLOduration=3.627408731 podStartE2EDuration="17.042800747s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 
11:38:36.443587879 +0000 UTC m=+992.238467863" lastFinishedPulling="2026-01-28 11:38:49.858979895 +0000 UTC m=+1005.653859879" observedRunningTime="2026-01-28 11:38:51.039499822 +0000 UTC m=+1006.834379806" watchObservedRunningTime="2026-01-28 11:38:51.042800747 +0000 UTC m=+1006.837680731" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.044578 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" podStartSLOduration=3.514635442 podStartE2EDuration="17.044572354s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.369060267 +0000 UTC m=+992.163940251" lastFinishedPulling="2026-01-28 11:38:49.898997179 +0000 UTC m=+1005.693877163" observedRunningTime="2026-01-28 11:38:51.018191874 +0000 UTC m=+1006.813071858" watchObservedRunningTime="2026-01-28 11:38:51.044572354 +0000 UTC m=+1006.839452338" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.065330 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" podStartSLOduration=3.605234016 podStartE2EDuration="17.065310004s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.437787835 +0000 UTC m=+992.232667819" lastFinishedPulling="2026-01-28 11:38:49.897863823 +0000 UTC m=+1005.692743807" observedRunningTime="2026-01-28 11:38:51.060260453 +0000 UTC m=+1006.855140437" watchObservedRunningTime="2026-01-28 11:38:51.065310004 +0000 UTC m=+1006.860189988" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.098846 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" podStartSLOduration=3.641333005 podStartE2EDuration="17.098827361s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.439428607 
+0000 UTC m=+992.234308591" lastFinishedPulling="2026-01-28 11:38:49.896922963 +0000 UTC m=+1005.691802947" observedRunningTime="2026-01-28 11:38:51.098465999 +0000 UTC m=+1006.893345993" watchObservedRunningTime="2026-01-28 11:38:51.098827361 +0000 UTC m=+1006.893707345" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.126753 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" podStartSLOduration=4.064835465 podStartE2EDuration="17.126731179s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.831694373 +0000 UTC m=+992.626574357" lastFinishedPulling="2026-01-28 11:38:49.893590087 +0000 UTC m=+1005.688470071" observedRunningTime="2026-01-28 11:38:51.121452421 +0000 UTC m=+1006.916332405" watchObservedRunningTime="2026-01-28 11:38:51.126731179 +0000 UTC m=+1006.921611153" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.398771 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.398920 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.398971 4804 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" 
not found Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.399058 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:39:07.399034967 +0000 UTC m=+1023.193914951 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "webhook-server-cert" not found Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.399073 4804 secret.go:188] Couldn't get secret openstack-operators/metrics-server-cert: secret "metrics-server-cert" not found Jan 28 11:38:51 crc kubenswrapper[4804]: E0128 11:38:51.399130 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs podName:58f748c2-ceb6-4d34-8a2e-8227e59ef560 nodeName:}" failed. No retries permitted until 2026-01-28 11:39:07.399112989 +0000 UTC m=+1023.193993043 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs") pod "openstack-operator-controller-manager-6548796f98-5pssc" (UID: "58f748c2-ceb6-4d34-8a2e-8227e59ef560") : secret "metrics-server-cert" not found Jan 28 11:38:51 crc kubenswrapper[4804]: I0128 11:38:51.785489 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.141350 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-7dd968899f-wl5w5" Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.150468 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-67bf948998-7dg9l" Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.285485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-6687f8d877-m5xng" Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.361160 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/placement-operator-controller-manager-5b964cf4cd-bfl45" Jan 28 11:38:55 crc kubenswrapper[4804]: I0128 11:38:55.630385 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-64b5b76f97-2hdgj" Jan 28 11:38:56 crc kubenswrapper[4804]: I0128 11:38:56.824997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" event={"ID":"b79b961c-583d-4e78-8513-c44ed292c325","Type":"ContainerStarted","Data":"2071539e06121fac45b683f4960e5f02191f22ea849321a689959060ba58da84"} Jan 28 11:38:56 crc kubenswrapper[4804]: I0128 11:38:56.825541 
4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:38:56 crc kubenswrapper[4804]: I0128 11:38:56.845783 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" podStartSLOduration=3.13566418 podStartE2EDuration="22.845767572s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.869745265 +0000 UTC m=+992.664625249" lastFinishedPulling="2026-01-28 11:38:56.579848657 +0000 UTC m=+1012.374728641" observedRunningTime="2026-01-28 11:38:56.842487878 +0000 UTC m=+1012.637367862" watchObservedRunningTime="2026-01-28 11:38:56.845767572 +0000 UTC m=+1012.640647556" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.834257 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" event={"ID":"eb1c01a9-6548-49cd-8e1f-4f01daaff754","Type":"ContainerStarted","Data":"8bd9abd0f32f837cd1e7801bbf3dfd8378986d84e82fe95de434217bb8de39d6"} Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.834544 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.836674 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" event={"ID":"7ab2436a-1b54-4c5e-bdc1-959026660c98","Type":"ContainerStarted","Data":"197bad519c2ba331f90a2864e8199f2d38c3e84ea8505111cbd8a5a731405ebe"} Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.837736 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.839336 4804 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" event={"ID":"67fbb1e9-d718-4075-971a-33a245c498e3","Type":"ContainerStarted","Data":"0a011c9eaeed72b2c571ebf8dc81aca6924c6c1913857d1b5206f3479caf322e"} Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.839563 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.841162 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" event={"ID":"ff35634f-2b61-44e4-934a-74b39c5b7335","Type":"ContainerStarted","Data":"b22295c754ec883bf9af52effd5d6e7fa59e734486789257cdf0269cd120953f"} Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.841895 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.843448 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" event={"ID":"69938639-9ff0-433c-bd73-8d129935e7d4","Type":"ContainerStarted","Data":"c7d65a15f2beb57ef107124c59354137804c3bf59dab32fb2bab8f703dcec92d"} Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.865673 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" podStartSLOduration=4.599565566 podStartE2EDuration="23.865654146s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.859415976 +0000 UTC m=+992.654295960" lastFinishedPulling="2026-01-28 11:38:56.125504556 +0000 UTC m=+1011.920384540" observedRunningTime="2026-01-28 11:38:57.864376856 +0000 UTC m=+1013.659256840" watchObservedRunningTime="2026-01-28 
11:38:57.865654146 +0000 UTC m=+1013.660534130" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.916264 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" podStartSLOduration=4.261166544 podStartE2EDuration="23.916237456s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.862652338 +0000 UTC m=+992.657532332" lastFinishedPulling="2026-01-28 11:38:56.51772325 +0000 UTC m=+1012.312603244" observedRunningTime="2026-01-28 11:38:57.884285649 +0000 UTC m=+1013.679165633" watchObservedRunningTime="2026-01-28 11:38:57.916237456 +0000 UTC m=+1013.711117440" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.930109 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" podStartSLOduration=4.189263077 podStartE2EDuration="23.930083367s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.839091249 +0000 UTC m=+992.633971243" lastFinishedPulling="2026-01-28 11:38:56.579911549 +0000 UTC m=+1012.374791533" observedRunningTime="2026-01-28 11:38:57.913105656 +0000 UTC m=+1013.707985680" watchObservedRunningTime="2026-01-28 11:38:57.930083367 +0000 UTC m=+1013.724963351" Jan 28 11:38:57 crc kubenswrapper[4804]: I0128 11:38:57.935621 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" podStartSLOduration=4.228752162 podStartE2EDuration="23.935604722s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.852293809 +0000 UTC m=+992.647173793" lastFinishedPulling="2026-01-28 11:38:56.559146369 +0000 UTC m=+1012.354026353" observedRunningTime="2026-01-28 11:38:57.93458031 +0000 UTC m=+1013.729460294" watchObservedRunningTime="2026-01-28 11:38:57.935604722 +0000 UTC 
m=+1013.730484706" Jan 28 11:39:01 crc kubenswrapper[4804]: I0128 11:39:01.919360 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 11:39:01 crc kubenswrapper[4804]: I0128 11:39:01.953619 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-668c99d594-cqlch" podStartSLOduration=7.249079704 podStartE2EDuration="26.95359876s" podCreationTimestamp="2026-01-28 11:38:35 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.85453288 +0000 UTC m=+992.649412864" lastFinishedPulling="2026-01-28 11:38:56.559051936 +0000 UTC m=+1012.353931920" observedRunningTime="2026-01-28 11:38:57.954208535 +0000 UTC m=+1013.749088529" watchObservedRunningTime="2026-01-28 11:39:01.95359876 +0000 UTC m=+1017.748478744" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.702668 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-8d874c8fc-j5j86" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.731752 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-7b6c4d8c5f-vjb6d" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.742204 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-6d9697b7f4-fbggh" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.759618 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-8886f4c47-qz2dl" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.799849 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-69d6db494d-hxv8b" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.855823 4804 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-5fb775575f-fw9dq" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.969019 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5f4b8bd54d-k6rzx" Jan 28 11:39:04 crc kubenswrapper[4804]: I0128 11:39:04.982777 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-84f48565d4-s92b7" Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.169497 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-585dbc889-n9kpn" Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.293690 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-788c46999f-4cpk5" Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.483236 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-68fc8c869-fwd68" Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.771960 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-56f8bfcd9f-9vgvb" Jan 28 11:39:05 crc kubenswrapper[4804]: I0128 11:39:05.790265 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-564965969-659wf" Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.533264 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " 
pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.543758 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f75f08ff-7d3c-4fb4-a366-1c996771a71d-cert\") pod \"infra-operator-controller-manager-79955696d6-wb5k2\" (UID: \"f75f08ff-7d3c-4fb4-a366-1c996771a71d\") " pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.729571 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-7v747" Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.738197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.939784 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:39:06 crc kubenswrapper[4804]: I0128 11:39:06.945285 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a26075bd-4d23-463a-abe8-575a02ebc9ad-cert\") pod \"openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg\" (UID: \"a26075bd-4d23-463a-abe8-575a02ebc9ad\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.112096 4804 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-m8hd9" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.119542 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.179053 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2"] Jan 28 11:39:07 crc kubenswrapper[4804]: W0128 11:39:07.190246 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf75f08ff_7d3c_4fb4_a366_1c996771a71d.slice/crio-007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553 WatchSource:0}: Error finding container 007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553: Status 404 returned error can't find the container with id 007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553 Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.447214 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.447294 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 
11:39:07.451072 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-webhook-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.451199 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/58f748c2-ceb6-4d34-8a2e-8227e59ef560-metrics-certs\") pod \"openstack-operator-controller-manager-6548796f98-5pssc\" (UID: \"58f748c2-ceb6-4d34-8a2e-8227e59ef560\") " pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.492586 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-lnb4p" Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.501091 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:39:07 crc kubenswrapper[4804]: W0128 11:39:07.556584 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda26075bd_4d23_463a_abe8_575a02ebc9ad.slice/crio-e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17 WatchSource:0}: Error finding container e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17: Status 404 returned error can't find the container with id e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17 Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.561346 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg"] Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.726652 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc"] Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.915128 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" event={"ID":"f75f08ff-7d3c-4fb4-a366-1c996771a71d","Type":"ContainerStarted","Data":"007256377d87bcce0fc771874d67998d685309cd08193cf70ebe2cf298527553"} Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.916661 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" event={"ID":"58f748c2-ceb6-4d34-8a2e-8227e59ef560","Type":"ContainerStarted","Data":"1e544f953665c7fe834e4f2624ccece8a156709e56ca10a1f27eaaa71e3309ce"} Jan 28 11:39:07 crc kubenswrapper[4804]: I0128 11:39:07.917531 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" 
event={"ID":"a26075bd-4d23-463a-abe8-575a02ebc9ad","Type":"ContainerStarted","Data":"e5cc146641de86d5bb828c31180237e006b1e4c12d4716b1100020f379ed2b17"} Jan 28 11:39:08 crc kubenswrapper[4804]: I0128 11:39:08.926229 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" event={"ID":"58f748c2-ceb6-4d34-8a2e-8227e59ef560","Type":"ContainerStarted","Data":"e3cf0f2cb0f2fbd8843dfb8f1bd8759058482fec6da19b034127db5e3fc2398e"} Jan 28 11:39:08 crc kubenswrapper[4804]: I0128 11:39:08.926587 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:39:08 crc kubenswrapper[4804]: I0128 11:39:08.952850 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" podStartSLOduration=33.952830143 podStartE2EDuration="33.952830143s" podCreationTimestamp="2026-01-28 11:38:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:39:08.949039702 +0000 UTC m=+1024.743919696" watchObservedRunningTime="2026-01-28 11:39:08.952830143 +0000 UTC m=+1024.747710127" Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.969979 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" event={"ID":"a26075bd-4d23-463a-abe8-575a02ebc9ad","Type":"ContainerStarted","Data":"f03cfe72bdd8bab0fbb8940737aebd535fd0f2dc1d608dde1bc5d7dbef124231"} Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.970523 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.971495 4804 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" event={"ID":"f75f08ff-7d3c-4fb4-a366-1c996771a71d","Type":"ContainerStarted","Data":"a2814bc7654130ef669480e154b22a349677f98f5b93509bddd76348efb0826e"} Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.971551 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.973042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" event={"ID":"8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1","Type":"ContainerStarted","Data":"1ff2f0b9fe55e9903b74563ce7c5e858365452350570fedfd3b61f25bcca9b0b"} Jan 28 11:39:13 crc kubenswrapper[4804]: I0128 11:39:13.973231 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:39:14 crc kubenswrapper[4804]: I0128 11:39:14.001838 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" podStartSLOduration=34.255202407 podStartE2EDuration="40.001822237s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:39:07.5695002 +0000 UTC m=+1023.364380184" lastFinishedPulling="2026-01-28 11:39:13.31612003 +0000 UTC m=+1029.111000014" observedRunningTime="2026-01-28 11:39:13.996496368 +0000 UTC m=+1029.791376372" watchObservedRunningTime="2026-01-28 11:39:14.001822237 +0000 UTC m=+1029.796702211" Jan 28 11:39:14 crc kubenswrapper[4804]: I0128 11:39:14.014705 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" podStartSLOduration=33.891886872 podStartE2EDuration="40.014683897s" 
podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:39:07.192226461 +0000 UTC m=+1022.987106445" lastFinishedPulling="2026-01-28 11:39:13.315023486 +0000 UTC m=+1029.109903470" observedRunningTime="2026-01-28 11:39:14.01322809 +0000 UTC m=+1029.808108074" watchObservedRunningTime="2026-01-28 11:39:14.014683897 +0000 UTC m=+1029.809563891" Jan 28 11:39:14 crc kubenswrapper[4804]: I0128 11:39:14.030421 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" podStartSLOduration=4.326266466 podStartE2EDuration="40.030400387s" podCreationTimestamp="2026-01-28 11:38:34 +0000 UTC" firstStartedPulling="2026-01-28 11:38:36.435018446 +0000 UTC m=+992.229898430" lastFinishedPulling="2026-01-28 11:39:12.139152367 +0000 UTC m=+1027.934032351" observedRunningTime="2026-01-28 11:39:14.026450672 +0000 UTC m=+1029.821330656" watchObservedRunningTime="2026-01-28 11:39:14.030400387 +0000 UTC m=+1029.825280371" Jan 28 11:39:17 crc kubenswrapper[4804]: I0128 11:39:17.508714 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-6548796f98-5pssc" Jan 28 11:39:25 crc kubenswrapper[4804]: I0128 11:39:25.227816 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-55bff696bd-dndv5" Jan 28 11:39:26 crc kubenswrapper[4804]: I0128 11:39:26.746580 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-79955696d6-wb5k2" Jan 28 11:39:27 crc kubenswrapper[4804]: I0128 11:39:27.126856 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.204997 4804 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"] Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.210394 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214422 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-87989" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214736 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214769 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.214984 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.219718 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"] Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.264304 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"] Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.266079 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.273822 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.276599 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"] Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.299559 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.299650 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.400919 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.400992 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" Jan 28 11:39:41 crc 
kubenswrapper[4804]: I0128 11:39:41.401076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.401106 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.401149 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.402032 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.424227 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"dnsmasq-dns-675f4bcbfc-2gsvc\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.502815 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.502920 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.502966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.503704 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.503806 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.523491 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"dnsmasq-dns-78dd6ddcc-69x8l\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.540253 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.583872 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l"
Jan 28 11:39:41 crc kubenswrapper[4804]: I0128 11:39:41.788106 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"]
Jan 28 11:39:42 crc kubenswrapper[4804]: W0128 11:39:42.078448 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4a7a23b8_853e_4c7e_8865_b4857330ae7a.slice/crio-ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e WatchSource:0}: Error finding container ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e: Status 404 returned error can't find the container with id ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e
Jan 28 11:39:42 crc kubenswrapper[4804]: I0128 11:39:42.081294 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"]
Jan 28 11:39:42 crc kubenswrapper[4804]: I0128 11:39:42.169773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" event={"ID":"4a7a23b8-853e-4c7e-8865-b4857330ae7a","Type":"ContainerStarted","Data":"ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e"}
Jan 28 11:39:42 crc kubenswrapper[4804]: I0128 11:39:42.170841 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" event={"ID":"6cc67125-e00e-437f-aa24-de4207035567","Type":"ContainerStarted","Data":"80b9eeef4de23f3be32b5f9a2473b5327cc05d25c610781a1840375e074ddb02"}
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.470407 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"]
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.500319 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"]
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.501428 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.516110 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"]
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.644631 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.644681 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.644713 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.746101 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.746141 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.746169 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.747290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.748641 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.785684 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"dnsmasq-dns-5ccc8479f9-6pb25\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:43 crc kubenswrapper[4804]: I0128 11:39:43.824020 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.166943 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.177236 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.220486 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.225062 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.234863 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.360463 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.360508 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.360789 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.463047 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.463723 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.463757 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.464933 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.465388 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.501173 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"dnsmasq-dns-57d769cc4f-xc7n9\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.560543 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.643867 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.645419 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.649238 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.652922 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653123 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653318 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-lq4ln"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653323 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653371 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.653495 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.658325 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781676 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781704 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781733 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781866 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.781973 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782020 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782181 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.782341 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884449 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884500 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884534 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884662 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884686 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884714 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884749 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884767 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.884788 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.886542 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.887697 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.892807 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.894041 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.895684 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.898237 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.898668 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.916395 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.917431 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.918270 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.929272 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.940618 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"]
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.951364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:44 crc kubenswrapper[4804]: I0128 11:39:44.969757 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.225901 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerStarted","Data":"d7d8077111dc71deae67122e06d45da608d04892783ac52603e5acfd01f98f37"}
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.227784 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerStarted","Data":"fb24c3a897ceddd1a2c22ed7950667aa2df40c1a865bdacebfbaa2864376b059"}
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.383709 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.401965 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.411837 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.415740 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-gp8xk"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.416281 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418041 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418245 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418289 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.418353 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.427830 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.428164 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501609 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501655 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501753 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501867 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.501997 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502064 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502169 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502259 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502288 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.502374 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604060 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604117 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604153 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604183 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604206 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604278 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0"
Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604310 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604338 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604361 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604396 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.604439 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.608173 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: 
\"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611106 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611373 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611706 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.611777 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.614397 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.622169 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.622755 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.624249 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.641928 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.644846 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.670388 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"rabbitmq-server-0\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " 
pod="openstack/rabbitmq-server-0" Jan 28 11:39:45 crc kubenswrapper[4804]: I0128 11:39:45.746804 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.264612 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerStarted","Data":"304507b474cdd7086e7df033bc16291530ac6b5f55a2e85e565b86562e7fde59"} Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.794651 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.801856 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.810124 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-vwdrn" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.811208 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.815340 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.815600 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.838734 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"combined-ca-bundle" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.850746 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936199 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936242 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936264 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936283 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936740 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936789 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936828 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:46 crc kubenswrapper[4804]: I0128 11:39:46.936872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.038872 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.038964 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.038991 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod 
\"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039007 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039021 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039038 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039079 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.039118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc 
kubenswrapper[4804]: I0128 11:39:47.041097 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.042276 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.042574 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.042704 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.043171 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.059147 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.061202 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.072451 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.076603 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"openstack-galera-0\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " pod="openstack/openstack-galera-0" Jan 28 11:39:47 crc kubenswrapper[4804]: I0128 11:39:47.154647 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.103701 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.116381 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.122339 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.124188 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.124588 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-rk5hn" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.128552 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.133195 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.177696 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.178909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179052 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179113 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179160 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179239 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179275 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.179400 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" 
(UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.280970 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281044 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281083 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281139 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281182 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281324 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.281356 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.282387 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.283864 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" 
(UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.345873 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.350278 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.350418 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.397189 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.397476 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " 
pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.398527 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.409546 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"openstack-cell1-galera-0\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.465984 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.505568 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.506603 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.515102 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-zbd85" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.515402 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.515685 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.525544 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588647 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588699 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588753 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588787 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.588827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690341 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690444 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690504 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690529 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"memcached-0\" (UID: 
\"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.690597 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.691744 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.692431 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.697571 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.718210 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.727275 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flxcf\" (UniqueName: 
\"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"memcached-0\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " pod="openstack/memcached-0" Jan 28 11:39:48 crc kubenswrapper[4804]: I0128 11:39:48.828929 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.308537 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.310931 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.315076 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-t7p54" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.332943 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.433814 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"kube-state-metrics-0\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") " pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.535526 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"kube-state-metrics-0\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") " pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.566829 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r249v\" (UniqueName: 
\"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"kube-state-metrics-0\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") " pod="openstack/kube-state-metrics-0" Jan 28 11:39:50 crc kubenswrapper[4804]: I0128 11:39:50.640449 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.657644 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.660356 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.663285 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-9cq62" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.666571 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.667547 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680128 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680179 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " 
pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680207 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680365 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.680420 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 
11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.688048 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.746785 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.748497 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.754697 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786341 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786442 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786497 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786546 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786574 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786600 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786633 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786673 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786731 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod 
\"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786757 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786792 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.786848 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.788346 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 
11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.789213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.789383 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.791449 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.799581 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.800184 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.822670 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frdmr\" (UniqueName: 
\"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ovn-controller-xtdr8\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.892768 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893145 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893256 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893354 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"ovn-controller-ovs-pfzkj\" (UID: 
\"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.893465 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894148 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894347 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894411 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.894513 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.896769 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.921447 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"ovn-controller-ovs-pfzkj\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:52 crc kubenswrapper[4804]: I0128 11:39:52.987751 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8" Jan 28 11:39:53 crc kubenswrapper[4804]: I0128 11:39:53.064087 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.605817 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.607336 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.611728 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-q2swt" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612066 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612226 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612063 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.612406 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.678493 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729778 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729858 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729920 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.729958 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730056 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730090 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730120 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.730144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.832934 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833004 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833052 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833072 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833176 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 
11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833233 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833286 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833321 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.833761 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.834036 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") device mount path \"/mnt/openstack/pv05\"" pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.834557 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.835247 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.841183 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.842505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.844220 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.857832 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " 
pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.874257 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"ovsdbserver-nb-0\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:54 crc kubenswrapper[4804]: I0128 11:39:54.943954 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.628584 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.640558 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.643724 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.644042 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.644287 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.644723 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-mmrcs" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.652202 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.796924 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"ovsdbserver-sb-0\" 
(UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797016 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797093 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797138 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797171 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797207 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " 
pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.797273 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899328 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899420 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899465 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899509 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899556 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899580 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899647 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.899794 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: 
\"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.903308 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.903678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.905357 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.906235 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.911231 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.917656 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.917773 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.924138 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " pod="openstack/ovsdbserver-sb-0" Jan 28 11:39:57 crc kubenswrapper[4804]: I0128 11:39:57.992542 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.070396 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.071269 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-lk9ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFi
lesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-2gsvc_openstack(6cc67125-e00e-437f-aa24-de4207035567): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.073088 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" podUID="6cc67125-e00e-437f-aa24-de4207035567" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.132410 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.132920 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ccx54,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-69x8l_openstack(4a7a23b8-853e-4c7e-8865-b4857330ae7a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.134142 4804 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" podUID="4a7a23b8-853e-4c7e-8865-b4857330ae7a" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.298022 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.298199 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fdvcs,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-xc7n9_openstack(2bf63c78-fb1d-4777-9643-0923cf3a4c57): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.299665 4804 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.460675 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: W0128 11:40:08.475390 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24549b02_2977_49ee_8f25_a6ed25e523d1.slice/crio-b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4 WatchSource:0}: Error finding container b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4: Status 404 returned error can't find the container with id b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4 Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.507862 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerStarted","Data":"b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4"} Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.509725 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.563115 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.589378 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: 
E0128 11:40:08.673639 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.674211 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nfdh5dfhb6h64h676hc4h78h97h669h54chfbh696hb5h54bh5d4h6bh64h644h677h584h5cbh698h9dh5bbh5f8h5b8hcdh644h5c7h694hbfh589q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8t9pp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,R
unAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-5ccc8479f9-6pb25_openstack(303230dd-ae75-4c0f-abb8-be1086a098c5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:40:08 crc kubenswrapper[4804]: E0128 11:40:08.676057 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.718084 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.764001 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.868343 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:40:08 crc kubenswrapper[4804]: I0128 11:40:08.961590 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.017058 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.033408 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.147346 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176311 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") pod \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") pod \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176593 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") pod \"6cc67125-e00e-437f-aa24-de4207035567\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176626 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") pod \"6cc67125-e00e-437f-aa24-de4207035567\" (UID: \"6cc67125-e00e-437f-aa24-de4207035567\") " Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.176653 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") pod \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\" (UID: \"4a7a23b8-853e-4c7e-8865-b4857330ae7a\") " Jan 28 11:40:09 crc 
kubenswrapper[4804]: I0128 11:40:09.177549 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config" (OuterVolumeSpecName: "config") pod "6cc67125-e00e-437f-aa24-de4207035567" (UID: "6cc67125-e00e-437f-aa24-de4207035567"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.177558 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4a7a23b8-853e-4c7e-8865-b4857330ae7a" (UID: "4a7a23b8-853e-4c7e-8865-b4857330ae7a"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.179027 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config" (OuterVolumeSpecName: "config") pod "4a7a23b8-853e-4c7e-8865-b4857330ae7a" (UID: "4a7a23b8-853e-4c7e-8865-b4857330ae7a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.182440 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54" (OuterVolumeSpecName: "kube-api-access-ccx54") pod "4a7a23b8-853e-4c7e-8865-b4857330ae7a" (UID: "4a7a23b8-853e-4c7e-8865-b4857330ae7a"). InnerVolumeSpecName "kube-api-access-ccx54". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.246578 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql" (OuterVolumeSpecName: "kube-api-access-lk9ql") pod "6cc67125-e00e-437f-aa24-de4207035567" (UID: "6cc67125-e00e-437f-aa24-de4207035567"). InnerVolumeSpecName "kube-api-access-lk9ql". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279507 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279556 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ccx54\" (UniqueName: \"kubernetes.io/projected/4a7a23b8-853e-4c7e-8865-b4857330ae7a-kube-api-access-ccx54\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279570 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk9ql\" (UniqueName: \"kubernetes.io/projected/6cc67125-e00e-437f-aa24-de4207035567-kube-api-access-lk9ql\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279582 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6cc67125-e00e-437f-aa24-de4207035567-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.279594 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a7a23b8-853e-4c7e-8865-b4857330ae7a-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.517503 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" 
event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerStarted","Data":"33b738bafa7ea125cb6f8e21be749a37e8dc0b050b5dffa31b3e9875c08ddd2d"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.519296 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerStarted","Data":"86818d705a40c4508845f5e3530cd1a2ecd08917ac1287e69fd364a076602c00"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.520275 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerStarted","Data":"25c9a781686743f7412ee94f0767d676a774f06512184aef56e510538efe72e7"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.521779 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerStarted","Data":"a5146612f4e2d80705681617c2e405b8c7dbe80637772da2d39bae9bb807359c"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.522975 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerStarted","Data":"a6f77cd6c96b39492fe76acbd919310cca2dbd61ed6cf94d721e54f9cb0227d1"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.524255 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerStarted","Data":"e28d6e15bb8b7864184a210b8a21979cfee4c6a5d5b942d21fe32b6ed7b6e02c"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.525762 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" event={"ID":"4a7a23b8-853e-4c7e-8865-b4857330ae7a","Type":"ContainerDied","Data":"ff14b73376c08a9a042ed15e0e4d81fca52289921ebc9fbc4e5045162772a04e"} Jan 28 11:40:09 crc 
kubenswrapper[4804]: I0128 11:40:09.525839 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-69x8l" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.527051 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.527000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-2gsvc" event={"ID":"6cc67125-e00e-437f-aa24-de4207035567","Type":"ContainerDied","Data":"80b9eeef4de23f3be32b5f9a2473b5327cc05d25c610781a1840375e074ddb02"} Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.528093 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"2ef238b63ba108007593ebb8599aaea3fae02c4b5040dd8085355ce0141a6ab3"} Jan 28 11:40:09 crc kubenswrapper[4804]: E0128 11:40:09.530372 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.609397 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.624980 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-2gsvc"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.642919 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"] Jan 28 11:40:09 crc kubenswrapper[4804]: I0128 11:40:09.647230 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/dnsmasq-dns-78dd6ddcc-69x8l"] Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.075053 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:40:10 crc kubenswrapper[4804]: W0128 11:40:10.083057 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6c76352_2487_4098_bbee_579834052292.slice/crio-d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771 WatchSource:0}: Error finding container d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771: Status 404 returned error can't find the container with id d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771 Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.541048 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerStarted","Data":"d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771"} Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.930582 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a7a23b8-853e-4c7e-8865-b4857330ae7a" path="/var/lib/kubelet/pods/4a7a23b8-853e-4c7e-8865-b4857330ae7a/volumes" Jan 28 11:40:10 crc kubenswrapper[4804]: I0128 11:40:10.931535 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cc67125-e00e-437f-aa24-de4207035567" path="/var/lib/kubelet/pods/6cc67125-e00e-437f-aa24-de4207035567/volumes" Jan 28 11:40:11 crc kubenswrapper[4804]: I0128 11:40:11.550225 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerStarted","Data":"b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb"} Jan 28 11:40:11 crc kubenswrapper[4804]: I0128 11:40:11.554635 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerStarted","Data":"938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a"} Jan 28 11:40:12 crc kubenswrapper[4804]: I0128 11:40:12.582798 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:40:12 crc kubenswrapper[4804]: I0128 11:40:12.583150 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.619716 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerStarted","Data":"71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.622656 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerStarted","Data":"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.622808 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.625247 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" 
event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerStarted","Data":"4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.625464 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-xtdr8" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.627740 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.629411 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerStarted","Data":"386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.629564 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.631069 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerStarted","Data":"445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.632801 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerStarted","Data":"1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.634400 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" 
event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerStarted","Data":"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340"} Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.663801 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=19.297823707 podStartE2EDuration="29.663775846s" podCreationTimestamp="2026-01-28 11:39:50 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.878329354 +0000 UTC m=+1084.673209338" lastFinishedPulling="2026-01-28 11:40:19.244281483 +0000 UTC m=+1095.039161477" observedRunningTime="2026-01-28 11:40:19.660432089 +0000 UTC m=+1095.455312093" watchObservedRunningTime="2026-01-28 11:40:19.663775846 +0000 UTC m=+1095.458655830" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.707450 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=22.310888543 podStartE2EDuration="31.707434225s" podCreationTimestamp="2026-01-28 11:39:48 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.776439101 +0000 UTC m=+1084.571319085" lastFinishedPulling="2026-01-28 11:40:18.172984773 +0000 UTC m=+1093.967864767" observedRunningTime="2026-01-28 11:40:19.705521545 +0000 UTC m=+1095.500401529" watchObservedRunningTime="2026-01-28 11:40:19.707434225 +0000 UTC m=+1095.502314199" Jan 28 11:40:19 crc kubenswrapper[4804]: I0128 11:40:19.735584 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xtdr8" podStartSLOduration=18.445745976 podStartE2EDuration="27.735565371s" podCreationTimestamp="2026-01-28 11:39:52 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.729308511 +0000 UTC m=+1084.524188495" lastFinishedPulling="2026-01-28 11:40:18.019127906 +0000 UTC m=+1093.814007890" observedRunningTime="2026-01-28 11:40:19.730041615 +0000 UTC m=+1095.524921609" watchObservedRunningTime="2026-01-28 11:40:19.735565371 +0000 UTC 
m=+1095.530445355" Jan 28 11:40:20 crc kubenswrapper[4804]: I0128 11:40:20.660530 4804 generic.go:334] "Generic (PLEG): container finished" podID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" exitCode=0 Jan 28 11:40:20 crc kubenswrapper[4804]: I0128 11:40:20.660765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.670009 4804 generic.go:334] "Generic (PLEG): container finished" podID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" exitCode=0 Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.670103 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerDied","Data":"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.677696 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.677734 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerStarted","Data":"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784"} Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.677912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 
11:40:21.677969 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:40:21 crc kubenswrapper[4804]: I0128 11:40:21.719861 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-pfzkj" podStartSLOduration=20.670562666 podStartE2EDuration="29.719841973s" podCreationTimestamp="2026-01-28 11:39:52 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.970930802 +0000 UTC m=+1084.765810786" lastFinishedPulling="2026-01-28 11:40:18.020210099 +0000 UTC m=+1093.815090093" observedRunningTime="2026-01-28 11:40:21.712712146 +0000 UTC m=+1097.507592130" watchObservedRunningTime="2026-01-28 11:40:21.719841973 +0000 UTC m=+1097.514721957" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.686899 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerStarted","Data":"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300"} Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.687718 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.688964 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerStarted","Data":"083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885"} Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.690770 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerStarted","Data":"7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357"} Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.754316 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podStartSLOduration=3.2275082250000002 podStartE2EDuration="38.7542951s" podCreationTimestamp="2026-01-28 11:39:44 +0000 UTC" firstStartedPulling="2026-01-28 11:39:44.974657755 +0000 UTC m=+1060.769537749" lastFinishedPulling="2026-01-28 11:40:20.50144464 +0000 UTC m=+1096.296324624" observedRunningTime="2026-01-28 11:40:22.715987611 +0000 UTC m=+1098.510867605" watchObservedRunningTime="2026-01-28 11:40:22.7542951 +0000 UTC m=+1098.549175084" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.758269 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=17.408143301 podStartE2EDuration="29.758263577s" podCreationTimestamp="2026-01-28 11:39:53 +0000 UTC" firstStartedPulling="2026-01-28 11:40:10.085142599 +0000 UTC m=+1085.880022583" lastFinishedPulling="2026-01-28 11:40:22.435262875 +0000 UTC m=+1098.230142859" observedRunningTime="2026-01-28 11:40:22.749536129 +0000 UTC m=+1098.544416143" watchObservedRunningTime="2026-01-28 11:40:22.758263577 +0000 UTC m=+1098.553143561" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.776129 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=13.503354619 podStartE2EDuration="26.776108484s" podCreationTimestamp="2026-01-28 11:39:56 +0000 UTC" firstStartedPulling="2026-01-28 11:40:09.146693707 +0000 UTC m=+1084.941573681" lastFinishedPulling="2026-01-28 11:40:22.419447562 +0000 UTC m=+1098.214327546" observedRunningTime="2026-01-28 11:40:22.773729269 +0000 UTC m=+1098.568609273" watchObservedRunningTime="2026-01-28 11:40:22.776108484 +0000 UTC m=+1098.570988468" Jan 28 11:40:22 crc kubenswrapper[4804]: I0128 11:40:22.993208 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.718473 4804 generic.go:334] "Generic (PLEG): 
container finished" podID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" exitCode=0 Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.718569 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerDied","Data":"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340"} Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.724540 4804 generic.go:334] "Generic (PLEG): container finished" podID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerID="71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92" exitCode=0 Jan 28 11:40:23 crc kubenswrapper[4804]: I0128 11:40:23.725112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerDied","Data":"71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92"} Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.736210 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerStarted","Data":"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361"} Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.739145 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerStarted","Data":"351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9"} Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.764841 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=30.338695669 podStartE2EDuration="39.764821772s" podCreationTimestamp="2026-01-28 11:39:45 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.592766375 +0000 UTC 
m=+1084.387646359" lastFinishedPulling="2026-01-28 11:40:18.018892478 +0000 UTC m=+1093.813772462" observedRunningTime="2026-01-28 11:40:24.762645573 +0000 UTC m=+1100.557525587" watchObservedRunningTime="2026-01-28 11:40:24.764821772 +0000 UTC m=+1100.559701756" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.804199 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=27.982887772 podStartE2EDuration="37.804180874s" podCreationTimestamp="2026-01-28 11:39:47 +0000 UTC" firstStartedPulling="2026-01-28 11:40:08.47953213 +0000 UTC m=+1084.274412114" lastFinishedPulling="2026-01-28 11:40:18.300825232 +0000 UTC m=+1094.095705216" observedRunningTime="2026-01-28 11:40:24.794957191 +0000 UTC m=+1100.589837205" watchObservedRunningTime="2026-01-28 11:40:24.804180874 +0000 UTC m=+1100.599060868" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.945182 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.945296 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:24 crc kubenswrapper[4804]: I0128 11:40:24.993159 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.018917 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.045284 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.759746 4804 generic.go:334] "Generic (PLEG): container finished" podID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" exitCode=0 Jan 28 
11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.759820 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerDied","Data":"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084"} Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.806291 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Jan 28 11:40:25 crc kubenswrapper[4804]: I0128 11:40:25.814148 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.073348 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.106698 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.108020 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.110100 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.133856 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148701 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148738 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.148794 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " 
pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.229192 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.230881 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.234034 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251214 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251511 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251818 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.251966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod 
\"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.253507 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.253649 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.254481 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.256427 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.283044 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"dnsmasq-dns-5bf47b49b7-p7wz6\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") " pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.310388 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 
11:40:26.312461 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.315367 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.315678 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-jztfn" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.319485 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.323692 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.325617 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.340009 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.340463 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" containerID="cri-o://a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" gracePeriod=10 Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.354976 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.355399 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.355446 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.356076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.356144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.356181 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: 
I0128 11:40:26.387672 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.396268 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.408857 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.410049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.425279 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460288 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460370 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460394 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc 
kubenswrapper[4804]: I0128 11:40:26.460425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460444 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460466 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460503 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460523 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: 
I0128 11:40:26.460546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460572 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460591 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460619 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460643 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460672 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460690 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460708 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.460754 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.461253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" 
(UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.463772 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.464401 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.465740 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.468690 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.487964 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6kjd\" (UniqueName: 
\"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"ovn-controller-metrics-gtg97\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.547702 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562620 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562673 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562722 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562802 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562831 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562889 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562929 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod 
\"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562952 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.562991 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.563577 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.564318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.565102 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.565656 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.565860 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.567631 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.567651 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.569755 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.571453 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.574634 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.589785 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod \"dnsmasq-dns-8554648995-vnmsg\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.590772 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"ovn-northd-0\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.651355 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.775112 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.776864 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerStarted","Data":"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024"} Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.777057 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" containerID="cri-o://7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" gracePeriod=10 Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.777365 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.803482 4804 generic.go:334] "Generic (PLEG): container finished" podID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" exitCode=0 Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804584 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804722 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerDied","Data":"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300"} Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804756 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-xc7n9" event={"ID":"2bf63c78-fb1d-4777-9643-0923cf3a4c57","Type":"ContainerDied","Data":"d7d8077111dc71deae67122e06d45da608d04892783ac52603e5acfd01f98f37"} Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.804774 4804 scope.go:117] "RemoveContainer" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.836349 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.852739 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" podStartSLOduration=-9223371993.002058 podStartE2EDuration="43.852717545s" podCreationTimestamp="2026-01-28 11:39:43 +0000 UTC" firstStartedPulling="2026-01-28 11:39:44.206185014 +0000 UTC m=+1060.001064998" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:26.846074634 +0000 UTC m=+1102.640954608" watchObservedRunningTime="2026-01-28 11:40:26.852717545 +0000 UTC m=+1102.647597529" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.873148 4804 scope.go:117] "RemoveContainer" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.891049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.901174 4804 scope.go:117] "RemoveContainer" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" Jan 28 11:40:26 crc kubenswrapper[4804]: E0128 11:40:26.901689 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300\": container with ID starting with a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300 not found: ID does not exist" containerID="a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.901722 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300"} err="failed to get container status \"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300\": rpc error: code = NotFound desc = could not find container \"a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300\": container with ID starting with a3769dac503a4e5339c3ee9391d34b0d992f9c2b26bc3d2b3ddf9b6f6240e300 not found: ID does not exist" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.901748 4804 scope.go:117] "RemoveContainer" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" Jan 28 11:40:26 crc kubenswrapper[4804]: E0128 11:40:26.902158 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca\": container with ID starting with 947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca not found: ID does not exist" containerID="947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.902180 4804 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca"} err="failed to get container status \"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca\": rpc error: code = NotFound desc = could not find container \"947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca\": container with ID starting with 947d66683b5919a8fc79f8185327afec7ca654eb64f29ebeca3808e87dd0b6ca not found: ID does not exist" Jan 28 11:40:26 crc kubenswrapper[4804]: W0128 11:40:26.911707 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7359aec_58b3_4254_8765_cdc131e5f912.slice/crio-79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d WatchSource:0}: Error finding container 79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d: Status 404 returned error can't find the container with id 79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.973850 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") pod \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.973950 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") pod \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.974051 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") pod \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\" (UID: \"2bf63c78-fb1d-4777-9643-0923cf3a4c57\") " Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.987534 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs" (OuterVolumeSpecName: "kube-api-access-fdvcs") pod "2bf63c78-fb1d-4777-9643-0923cf3a4c57" (UID: "2bf63c78-fb1d-4777-9643-0923cf3a4c57"). InnerVolumeSpecName "kube-api-access-fdvcs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:26 crc kubenswrapper[4804]: I0128 11:40:26.991836 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.027514 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config" (OuterVolumeSpecName: "config") pod "2bf63c78-fb1d-4777-9643-0923cf3a4c57" (UID: "2bf63c78-fb1d-4777-9643-0923cf3a4c57"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.037475 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2bf63c78-fb1d-4777-9643-0923cf3a4c57" (UID: "2bf63c78-fb1d-4777-9643-0923cf3a4c57"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.076716 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.076762 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2bf63c78-fb1d-4777-9643-0923cf3a4c57-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.076779 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdvcs\" (UniqueName: \"kubernetes.io/projected/2bf63c78-fb1d-4777-9643-0923cf3a4c57-kube-api-access-fdvcs\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.157424 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.157779 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.157793 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.164780 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-xc7n9"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.197770 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.269798 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.318835 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:27 crc kubenswrapper[4804]: W0128 11:40:27.348059 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda31c7f4f_6e39_4542_b3f8_d5bfdcc0831c.slice/crio-1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b WatchSource:0}: Error finding container 1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b: Status 404 returned error can't find the container with id 1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.383039 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") pod \"303230dd-ae75-4c0f-abb8-be1086a098c5\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.383088 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") pod \"303230dd-ae75-4c0f-abb8-be1086a098c5\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.383395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") pod \"303230dd-ae75-4c0f-abb8-be1086a098c5\" (UID: \"303230dd-ae75-4c0f-abb8-be1086a098c5\") " Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.388987 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp" (OuterVolumeSpecName: "kube-api-access-8t9pp") pod "303230dd-ae75-4c0f-abb8-be1086a098c5" (UID: "303230dd-ae75-4c0f-abb8-be1086a098c5"). InnerVolumeSpecName "kube-api-access-8t9pp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.420329 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config" (OuterVolumeSpecName: "config") pod "303230dd-ae75-4c0f-abb8-be1086a098c5" (UID: "303230dd-ae75-4c0f-abb8-be1086a098c5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.421011 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "303230dd-ae75-4c0f-abb8-be1086a098c5" (UID: "303230dd-ae75-4c0f-abb8-be1086a098c5"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.485780 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.486375 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8t9pp\" (UniqueName: \"kubernetes.io/projected/303230dd-ae75-4c0f-abb8-be1086a098c5-kube-api-access-8t9pp\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.486388 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/303230dd-ae75-4c0f-abb8-be1086a098c5-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.813070 4804 generic.go:334] "Generic (PLEG): container finished" podID="2a94ea74-636e-4cb7-803b-01e91be31160" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c" exitCode=0 Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.813159 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerDied","Data":"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.813195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerStarted","Data":"a7461c4eba1d22105afb8f1414a73b5899821b91e52e8b7869ad478250c3c188"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.815490 4804 generic.go:334] "Generic (PLEG): container finished" podID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerID="a99548645bbd8f2136f9f7fb1affc4d254741865c846f3d3f9116fc59fc1d178" exitCode=0 Jan 28 11:40:27 
crc kubenswrapper[4804]: I0128 11:40:27.815575 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerDied","Data":"a99548645bbd8f2136f9f7fb1affc4d254741865c846f3d3f9116fc59fc1d178"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.815642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerStarted","Data":"1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819687 4804 generic.go:334] "Generic (PLEG): container finished" podID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" exitCode=0 Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819774 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerDied","Data":"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819778 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819824 4804 scope.go:117] "RemoveContainer" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.819807 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc8479f9-6pb25" event={"ID":"303230dd-ae75-4c0f-abb8-be1086a098c5","Type":"ContainerDied","Data":"fb24c3a897ceddd1a2c22ed7950667aa2df40c1a865bdacebfbaa2864376b059"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.821711 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerStarted","Data":"565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.821887 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerStarted","Data":"79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.827541 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerStarted","Data":"1c34e1e54f29019381489766526d85a7ed81f51d7a176f0cfb6db1161fa7dad8"} Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.866943 4804 scope.go:117] "RemoveContainer" containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.899749 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-gtg97" podStartSLOduration=1.899724019 podStartE2EDuration="1.899724019s" podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:27.887452969 +0000 UTC m=+1103.682332953" watchObservedRunningTime="2026-01-28 11:40:27.899724019 +0000 UTC m=+1103.694604013" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.921400 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.927920 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc8479f9-6pb25"] Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.932657 4804 scope.go:117] "RemoveContainer" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" Jan 28 11:40:27 crc kubenswrapper[4804]: E0128 11:40:27.933090 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024\": container with ID starting with 7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024 not found: ID does not exist" containerID="7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.933194 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024"} err="failed to get container status \"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024\": rpc error: code = NotFound desc = could not find container \"7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024\": container with ID starting with 7bf310b06fdb629fbcf9a613454e5a4c286f31861ac2e34ab2494fca1252a024 not found: ID does not exist" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.933222 4804 scope.go:117] "RemoveContainer" 
containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" Jan 28 11:40:27 crc kubenswrapper[4804]: E0128 11:40:27.935248 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084\": container with ID starting with 9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084 not found: ID does not exist" containerID="9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084" Jan 28 11:40:27 crc kubenswrapper[4804]: I0128 11:40:27.935296 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084"} err="failed to get container status \"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084\": rpc error: code = NotFound desc = could not find container \"9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084\": container with ID starting with 9962cb97657df6097e0e35c4405845c47ec7afff237904f29299e385372b8084 not found: ID does not exist" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.467423 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.467765 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.829867 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.835956 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerStarted","Data":"1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00"} Jan 28 11:40:28 crc kubenswrapper[4804]: 
I0128 11:40:28.837405 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerStarted","Data":"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"} Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.838150 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.840104 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerStarted","Data":"5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b"} Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.840652 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.870363 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-8554648995-vnmsg" podStartSLOduration=2.870343061 podStartE2EDuration="2.870343061s" podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:28.865815937 +0000 UTC m=+1104.660695931" watchObservedRunningTime="2026-01-28 11:40:28.870343061 +0000 UTC m=+1104.665223055" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.886711 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" podStartSLOduration=2.886689062 podStartE2EDuration="2.886689062s" podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:28.880089642 +0000 UTC m=+1104.674969626" 
watchObservedRunningTime="2026-01-28 11:40:28.886689062 +0000 UTC m=+1104.681569066" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.926770 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" path="/var/lib/kubelet/pods/2bf63c78-fb1d-4777-9643-0923cf3a4c57/volumes" Jan 28 11:40:28 crc kubenswrapper[4804]: I0128 11:40:28.927523 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" path="/var/lib/kubelet/pods/303230dd-ae75-4c0f-abb8-be1086a098c5/volumes" Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.540019 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.633478 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.850796 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerStarted","Data":"17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1"} Jan 28 11:40:29 crc kubenswrapper[4804]: I0128 11:40:29.889998 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=2.53223908 podStartE2EDuration="3.889983564s" podCreationTimestamp="2026-01-28 11:40:26 +0000 UTC" firstStartedPulling="2026-01-28 11:40:27.209922974 +0000 UTC m=+1103.004802958" lastFinishedPulling="2026-01-28 11:40:28.567667458 +0000 UTC m=+1104.362547442" observedRunningTime="2026-01-28 11:40:29.888006542 +0000 UTC m=+1105.682886516" watchObservedRunningTime="2026-01-28 11:40:29.889983564 +0000 UTC m=+1105.684863548" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.584104 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"] Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626090 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626388 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626403 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626430 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="init" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626436 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="init" Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626454 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626461 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: E0128 11:40:30.626474 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="init" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626479 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="init" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626663 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="303230dd-ae75-4c0f-abb8-be1086a098c5" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.626682 
4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2bf63c78-fb1d-4777-9643-0923cf3a4c57" containerName="dnsmasq-dns" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.627446 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.644542 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.655770 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.751944 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752002 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752120 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.752302 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.832493 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853683 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853783 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853846 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " 
pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853938 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.853965 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.855057 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.856297 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.856819 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.856956 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.860543 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.890839 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"dnsmasq-dns-b8fbc5445-smhkb\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.938737 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Jan 28 11:40:30 crc kubenswrapper[4804]: I0128 11:40:30.945609 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.461326 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.705397 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.714149 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.717630 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-files" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.721205 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-storage-config-data" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.721244 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-conf" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.721286 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-swift-dockercfg-z6brz" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.732091 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.772777 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.772829 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773014 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " 
pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773072 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773158 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.773373 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871302 4804 generic.go:334] "Generic (PLEG): container finished" podID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerID="cc41ce863945bdc29f63769a99ae0d6dadc7d7ef12a25abcef8a64fe330fdd73" exitCode=0 Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871412 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerDied","Data":"cc41ce863945bdc29f63769a99ae0d6dadc7d7ef12a25abcef8a64fe330fdd73"} Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871461 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" 
event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerStarted","Data":"99851c0d89d123f60d87fe5e7b4fa11b90a206a967c2a2ccd24c03d723ee66ce"} Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.871928 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns" containerID="cri-o://8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf" gracePeriod=10 Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.875863 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876011 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876085 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 
11:40:31.876290 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876352 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.876512 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.877848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.878130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:31 crc kubenswrapper[4804]: E0128 11:40:31.880449 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 11:40:31 crc kubenswrapper[4804]: E0128 11:40:31.880480 4804 projected.go:194] Error preparing data for projected 
volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 11:40:31 crc kubenswrapper[4804]: E0128 11:40:31.880528 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:32.380511778 +0000 UTC m=+1108.175391772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found
Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.882813 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.913600 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:31 crc kubenswrapper[4804]: I0128 11:40:31.915554 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.227143 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"]
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.229175 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.235137 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-config-data"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.236211 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.249353 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"]
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.272011 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"swift-ring-scripts"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294133 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294176 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294322 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294388 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294552 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294702 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.294840 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.324763 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.402240 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") "
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.402621 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") "
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.402783 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") "
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.403210 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") pod \"2a94ea74-636e-4cb7-803b-01e91be31160\" (UID: \"2a94ea74-636e-4cb7-803b-01e91be31160\") "
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406273 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406352 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406636 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.406716 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.407613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.407848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.407888 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.408411 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.408562 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b" (OuterVolumeSpecName: "kube-api-access-z5l6b") pod "2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "kube-api-access-z5l6b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.408869 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.410982 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.412229 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.412260 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.412341 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:33.412317545 +0000 UTC m=+1109.207197609 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.413253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.414432 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.420670 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.436819 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"swift-ring-rebalance-jxgc9\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") " pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.454657 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.464760 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.477153 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config" (OuterVolumeSpecName: "config") pod "2a94ea74-636e-4cb7-803b-01e91be31160" (UID: "2a94ea74-636e-4cb7-803b-01e91be31160"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.512550 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.512901 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.512987 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2a94ea74-636e-4cb7-803b-01e91be31160-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.513171 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5l6b\" (UniqueName: \"kubernetes.io/projected/2a94ea74-636e-4cb7-803b-01e91be31160-kube-api-access-z5l6b\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.556259 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.883041 4804 generic.go:334] "Generic (PLEG): container finished" podID="2a94ea74-636e-4cb7-803b-01e91be31160" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf" exitCode=0
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.883199 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.883183 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerDied","Data":"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"}
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.886893 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5bf47b49b7-p7wz6" event={"ID":"2a94ea74-636e-4cb7-803b-01e91be31160","Type":"ContainerDied","Data":"a7461c4eba1d22105afb8f1414a73b5899821b91e52e8b7869ad478250c3c188"}
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.886916 4804 scope.go:117] "RemoveContainer" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.893600 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerStarted","Data":"a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454"}
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.893896 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.923489 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" podStartSLOduration=2.923468703 podStartE2EDuration="2.923468703s" podCreationTimestamp="2026-01-28 11:40:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:32.919790306 +0000 UTC m=+1108.714670290" watchObservedRunningTime="2026-01-28 11:40:32.923468703 +0000 UTC m=+1108.718348687"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.957109 4804 scope.go:117] "RemoveContainer" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.968692 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"]
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.981791 4804 scope.go:117] "RemoveContainer" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"
Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.982386 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf\": container with ID starting with 8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf not found: ID does not exist" containerID="8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.982423 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf"} err="failed to get container status \"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf\": rpc error: code = NotFound desc = could not find container \"8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf\": container with ID starting with 8cab69521006c086550d27fac5b14a3f36d48ac62523186340f45f6fe401bfbf not found: ID does not exist"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.982449 4804 scope.go:117] "RemoveContainer" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c"
Jan 28 11:40:32 crc kubenswrapper[4804]: E0128 11:40:32.982760 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c\": container with ID starting with c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c not found: ID does not exist" containerID="c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.982786 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c"} err="failed to get container status \"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c\": rpc error: code = NotFound desc = could not find container \"c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c\": container with ID starting with c92ccc5c8eca397c98b59f1106d471b7d94082a12cd2d4d9713616ab1a0bae0c not found: ID does not exist"
Jan 28 11:40:32 crc kubenswrapper[4804]: I0128 11:40:32.985389 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5bf47b49b7-p7wz6"]
Jan 28 11:40:33 crc kubenswrapper[4804]: W0128 11:40:33.063562 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb46a04b_0e73_46fb_bcdf_a670c30d5531.slice/crio-54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565 WatchSource:0}: Error finding container 54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565: Status 404 returned error can't find the container with id 54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565
Jan 28 11:40:33 crc kubenswrapper[4804]: I0128 11:40:33.064858 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"]
Jan 28 11:40:33 crc kubenswrapper[4804]: I0128 11:40:33.430361 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:33 crc kubenswrapper[4804]: E0128 11:40:33.430636 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 11:40:33 crc kubenswrapper[4804]: E0128 11:40:33.430675 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 11:40:33 crc kubenswrapper[4804]: E0128 11:40:33.430750 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:35.430725149 +0000 UTC m=+1111.225605133 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found
Jan 28 11:40:33 crc kubenswrapper[4804]: I0128 11:40:33.902616 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerStarted","Data":"54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565"}
Jan 28 11:40:34 crc kubenswrapper[4804]: I0128 11:40:34.927568 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" path="/var/lib/kubelet/pods/2a94ea74-636e-4cb7-803b-01e91be31160/volumes"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.469751 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.470087 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found
Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.470115 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found
Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.470229 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:39.47018271 +0000 UTC m=+1115.265062694 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.850852 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-64l8r"]
Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.851340 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.851367 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns"
Jan 28 11:40:35 crc kubenswrapper[4804]: E0128 11:40:35.851407 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="init"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.851416 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="init"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.851673 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a94ea74-636e-4cb7-803b-01e91be31160" containerName="dnsmasq-dns"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.852562 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.854827 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.863297 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-64l8r"]
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.881045 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.881136 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.983652 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.983732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:35 crc kubenswrapper[4804]: I0128 11:40:35.984670 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:36 crc kubenswrapper[4804]: I0128 11:40:36.012542 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"root-account-create-update-64l8r\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:36 crc kubenswrapper[4804]: I0128 11:40:36.182898 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64l8r"
Jan 28 11:40:36 crc kubenswrapper[4804]: I0128 11:40:36.841112 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-8554648995-vnmsg"
Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.061205 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-64l8r"]
Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.944602 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerStarted","Data":"acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5"}
Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.946980 4804 generic.go:334] "Generic (PLEG): container finished" podID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerID="647b1f190be0e34804a1719e55a8c2587f822eeb47af8070a4c99ed681d8f789" exitCode=0
Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.947015 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64l8r" event={"ID":"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1","Type":"ContainerDied","Data":"647b1f190be0e34804a1719e55a8c2587f822eeb47af8070a4c99ed681d8f789"}
Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.947034 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64l8r" event={"ID":"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1","Type":"ContainerStarted","Data":"14ca1244796137d0c6b3dfdc5bf8667213bd1467f526fa625705496eede10232"}
Jan 28 11:40:37 crc kubenswrapper[4804]: I0128 11:40:37.978320 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-ring-rebalance-jxgc9" podStartSLOduration=2.214823764 podStartE2EDuration="5.978269837s" podCreationTimestamp="2026-01-28 11:40:32 +0000 UTC" firstStartedPulling="2026-01-28 11:40:33.065620278 +0000 UTC m=+1108.860500262" lastFinishedPulling="2026-01-28 11:40:36.829066351 +0000 UTC m=+1112.623946335" observedRunningTime="2026-01-28 11:40:37.966587645 +0000 UTC m=+1113.761467629" watchObservedRunningTime="2026-01-28 11:40:37.978269837 +0000 UTC m=+1113.773149841"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.389525 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-5t7jn"]
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.390940 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.397145 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5t7jn"]
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.441148 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.441216 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.542772 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.542825 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.543936 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.553396 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"]
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.555534 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.559507 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.593617 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"keystone-db-create-5t7jn\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.597381 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"]
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.644690 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.644742 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.712383 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zvgmg"]
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.714010 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zvgmg"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.715776 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5t7jn"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.726343 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zvgmg"]
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.746243 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.746402 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.746440 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.747417 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.747342 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.769170 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"keystone-f8f4-account-create-update-mg2gd\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.826765 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"]
Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.829952 4804 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.834762 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.848993 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.849148 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.852239 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.867083 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.877545 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod \"placement-db-create-zvgmg\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:38 crc 
kubenswrapper[4804]: I0128 11:40:38.926662 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.951097 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.951425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.996070 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:40:38 crc kubenswrapper[4804]: I0128 11:40:38.997360 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.016185 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.033369 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060173 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060246 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060374 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.060409 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.061473 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod 
\"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.085639 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"placement-ea29-account-create-update-fd9sb\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.118943 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.120335 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.123183 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.130570 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.149478 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161632 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161749 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161786 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.161871 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.162919 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod 
\"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.181523 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"glance-db-create-vmdbt\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.264252 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.264399 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.266385 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.267095 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.287034 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"glance-ec8f-account-create-update-wm9f2\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") " pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.321195 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.432912 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.449902 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.468461 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") pod \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.468524 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") pod \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\" (UID: \"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1\") " Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.469330 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" (UID: "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1"). 
InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.474013 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54" (OuterVolumeSpecName: "kube-api-access-mxf54") pod "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" (UID: "b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1"). InnerVolumeSpecName "kube-api-access-mxf54". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.571030 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.571119 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mxf54\" (UniqueName: \"kubernetes.io/projected/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-kube-api-access-mxf54\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.571164 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:39 crc kubenswrapper[4804]: E0128 11:40:39.571281 4804 projected.go:288] Couldn't get configMap openstack/swift-ring-files: configmap "swift-ring-files" not found Jan 28 11:40:39 crc kubenswrapper[4804]: E0128 11:40:39.571299 4804 projected.go:194] Error preparing data for projected volume etc-swift for pod openstack/swift-storage-0: configmap "swift-ring-files" not found Jan 28 11:40:39 crc kubenswrapper[4804]: E0128 11:40:39.571351 4804 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift podName:f452e749-06e2-4b9c-a4d7-8a63ccd07cfc nodeName:}" failed. No retries permitted until 2026-01-28 11:40:47.57133103 +0000 UTC m=+1123.366211014 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift") pod "swift-storage-0" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc") : configmap "swift-ring-files" not found Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.578440 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"] Jan 28 11:40:39 crc kubenswrapper[4804]: W0128 11:40:39.581672 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda4586997_59ed_4e13_b7ec_3146711f642c.slice/crio-9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b WatchSource:0}: Error finding container 9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b: Status 404 returned error can't find the container with id 9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.689557 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zvgmg"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.711815 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.821380 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:40:39 crc kubenswrapper[4804]: W0128 11:40:39.835604 4804 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod903b6b99_b94d_428a_9c9c_7465ef27ad40.slice/crio-ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0 WatchSource:0}: Error finding container ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0: Status 404 returned error can't find the container with id ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0 Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.992009 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"] Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.992502 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerStarted","Data":"ddb1f30d4961cdeec5b26416a480e4c0b1a3e9e39eedab64e0edf4f1452782c2"} Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.996602 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-64l8r" event={"ID":"b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1","Type":"ContainerDied","Data":"14ca1244796137d0c6b3dfdc5bf8667213bd1467f526fa625705496eede10232"} Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.996637 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-64l8r" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.996653 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14ca1244796137d0c6b3dfdc5bf8667213bd1467f526fa625705496eede10232" Jan 28 11:40:39 crc kubenswrapper[4804]: I0128 11:40:39.998058 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerStarted","Data":"9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b"} Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:39.999936 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerStarted","Data":"5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad"} Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.000023 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerStarted","Data":"f3890669a2cd664aad88617cbeaf1f93a1a4048bcda428a191c8ef4e1d58137a"} Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.002911 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerStarted","Data":"ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0"} Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.003837 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerStarted","Data":"75293dc771af25680556ee3acb3f64f045ce3898abcd88a264facd4d2213169b"} Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.026494 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/keystone-db-create-5t7jn" podStartSLOduration=2.026470297 podStartE2EDuration="2.026470297s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:40.020642351 +0000 UTC m=+1115.815522335" watchObservedRunningTime="2026-01-28 11:40:40.026470297 +0000 UTC m=+1115.821350281" Jan 28 11:40:40 crc kubenswrapper[4804]: I0128 11:40:40.947847 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.013762 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerStarted","Data":"1cdec6eb1be633affff1b7b15a04d38540b48582466e56d387986c60aa1a5c76"} Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.017409 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.017661 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-8554648995-vnmsg" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" containerID="cri-o://5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b" gracePeriod=10 Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.042819 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerStarted","Data":"f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549"} Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.044798 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" 
event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerStarted","Data":"61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96"} Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.046715 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerStarted","Data":"f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a"} Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.048386 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerStarted","Data":"07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f"} Jan 28 11:40:41 crc kubenswrapper[4804]: I0128 11:40:41.838233 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-8554648995-vnmsg" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.114:5353: connect: connection refused" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.074455 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerStarted","Data":"33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d"} Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.077657 4804 generic.go:334] "Generic (PLEG): container finished" podID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerID="5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad" exitCode=0 Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.077737 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" 
event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerDied","Data":"5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad"} Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.079993 4804 generic.go:334] "Generic (PLEG): container finished" podID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerID="5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b" exitCode=0 Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.080086 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerDied","Data":"5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b"} Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.093389 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-ec8f-account-create-update-wm9f2" podStartSLOduration=3.093367041 podStartE2EDuration="3.093367041s" podCreationTimestamp="2026-01-28 11:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.092444263 +0000 UTC m=+1117.887324257" watchObservedRunningTime="2026-01-28 11:40:42.093367041 +0000 UTC m=+1117.888247025" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.108006 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-64l8r"] Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.114812 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-64l8r"] Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.140499 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-create-zvgmg" podStartSLOduration=4.140481001 podStartE2EDuration="4.140481001s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.135813942 +0000 UTC m=+1117.930693946" watchObservedRunningTime="2026-01-28 11:40:42.140481001 +0000 UTC m=+1117.935360985" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.164553 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-create-vmdbt" podStartSLOduration=4.164519296 podStartE2EDuration="4.164519296s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.156402238 +0000 UTC m=+1117.951282222" watchObservedRunningTime="2026-01-28 11:40:42.164519296 +0000 UTC m=+1117.959399280" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.175319 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-ea29-account-create-update-fd9sb" podStartSLOduration=4.175274598 podStartE2EDuration="4.175274598s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.168280346 +0000 UTC m=+1117.963160340" watchObservedRunningTime="2026-01-28 11:40:42.175274598 +0000 UTC m=+1117.970154582" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.211143 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-f8f4-account-create-update-mg2gd" podStartSLOduration=4.211109109 podStartE2EDuration="4.211109109s" podCreationTimestamp="2026-01-28 11:40:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:42.183108758 +0000 UTC m=+1117.977988752" watchObservedRunningTime="2026-01-28 11:40:42.211109109 +0000 UTC m=+1118.005989093" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 
11:40:42.582763 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.582836 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:40:42 crc kubenswrapper[4804]: I0128 11:40:42.938664 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" path="/var/lib/kubelet/pods/b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1/volumes" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.111466 4804 generic.go:334] "Generic (PLEG): container finished" podID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerID="f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549" exitCode=0 Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.111662 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerDied","Data":"f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549"} Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.115262 4804 generic.go:334] "Generic (PLEG): container finished" podID="8b1029fc-e131-4d00-b538-6f0a17674c75" containerID="61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96" exitCode=0 Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.115945 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" 
event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerDied","Data":"61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96"} Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.430196 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.464741 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") pod \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.465127 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") pod \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\" (UID: \"54fa6273-e08e-4dbb-a86b-a8951e4100fa\") " Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.465984 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "54fa6273-e08e-4dbb-a86b-a8951e4100fa" (UID: "54fa6273-e08e-4dbb-a86b-a8951e4100fa"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.485635 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch" (OuterVolumeSpecName: "kube-api-access-mhpch") pod "54fa6273-e08e-4dbb-a86b-a8951e4100fa" (UID: "54fa6273-e08e-4dbb-a86b-a8951e4100fa"). InnerVolumeSpecName "kube-api-access-mhpch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.567463 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhpch\" (UniqueName: \"kubernetes.io/projected/54fa6273-e08e-4dbb-a86b-a8951e4100fa-kube-api-access-mhpch\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:43 crc kubenswrapper[4804]: I0128 11:40:43.567504 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/54fa6273-e08e-4dbb-a86b-a8951e4100fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.122779 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-5t7jn" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.122774 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-5t7jn" event={"ID":"54fa6273-e08e-4dbb-a86b-a8951e4100fa","Type":"ContainerDied","Data":"f3890669a2cd664aad88617cbeaf1f93a1a4048bcda428a191c8ef4e1d58137a"} Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.123226 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3890669a2cd664aad88617cbeaf1f93a1a4048bcda428a191c8ef4e1d58137a" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.123904 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerID="b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb" exitCode=0 Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.123952 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerDied","Data":"b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb"} Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.139261 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerID="938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a" exitCode=0 Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.139351 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerDied","Data":"938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a"} Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.271828 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380425 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380492 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380693 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380770 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" 
(UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.380853 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") pod \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\" (UID: \"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.389675 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh" (OuterVolumeSpecName: "kube-api-access-9q9hh") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "kube-api-access-9q9hh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.418835 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config" (OuterVolumeSpecName: "config") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.425515 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.438636 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.445807 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" (UID: "a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483509 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9q9hh\" (UniqueName: \"kubernetes.io/projected/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-kube-api-access-9q9hh\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483543 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483552 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483560 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 
11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.483569 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.563405 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.568632 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584642 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") pod \"8b1029fc-e131-4d00-b538-6f0a17674c75\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584689 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") pod \"903b6b99-b94d-428a-9c9c-7465ef27ad40\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584851 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") pod \"903b6b99-b94d-428a-9c9c-7465ef27ad40\" (UID: \"903b6b99-b94d-428a-9c9c-7465ef27ad40\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.584908 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") pod 
\"8b1029fc-e131-4d00-b538-6f0a17674c75\" (UID: \"8b1029fc-e131-4d00-b538-6f0a17674c75\") " Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.587794 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "903b6b99-b94d-428a-9c9c-7465ef27ad40" (UID: "903b6b99-b94d-428a-9c9c-7465ef27ad40"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.591568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8b1029fc-e131-4d00-b538-6f0a17674c75" (UID: "8b1029fc-e131-4d00-b538-6f0a17674c75"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.591976 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw" (OuterVolumeSpecName: "kube-api-access-g97lw") pod "8b1029fc-e131-4d00-b538-6f0a17674c75" (UID: "8b1029fc-e131-4d00-b538-6f0a17674c75"). InnerVolumeSpecName "kube-api-access-g97lw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.595551 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5" (OuterVolumeSpecName: "kube-api-access-297x5") pod "903b6b99-b94d-428a-9c9c-7465ef27ad40" (UID: "903b6b99-b94d-428a-9c9c-7465ef27ad40"). InnerVolumeSpecName "kube-api-access-297x5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686582 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8b1029fc-e131-4d00-b538-6f0a17674c75-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686623 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-297x5\" (UniqueName: \"kubernetes.io/projected/903b6b99-b94d-428a-9c9c-7465ef27ad40-kube-api-access-297x5\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686640 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/903b6b99-b94d-428a-9c9c-7465ef27ad40-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:44 crc kubenswrapper[4804]: I0128 11:40:44.686656 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g97lw\" (UniqueName: \"kubernetes.io/projected/8b1029fc-e131-4d00-b538-6f0a17674c75-kube-api-access-g97lw\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.148985 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-8554648995-vnmsg" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.149082 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-8554648995-vnmsg" event={"ID":"a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c","Type":"ContainerDied","Data":"1020242618b1dde6f3aaf71e5aba360d809d5638f6a17f243b95e504e340485b"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.149150 4804 scope.go:117] "RemoveContainer" containerID="5418c7b289f056b4f05b6a342643efaf91956352be4b6ee33c7e9e02353ffd7b" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.151967 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zvgmg" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.152684 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zvgmg" event={"ID":"8b1029fc-e131-4d00-b538-6f0a17674c75","Type":"ContainerDied","Data":"75293dc771af25680556ee3acb3f64f045ce3898abcd88a264facd4d2213169b"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.152722 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75293dc771af25680556ee3acb3f64f045ce3898abcd88a264facd4d2213169b" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.155739 4804 generic.go:334] "Generic (PLEG): container finished" podID="08795da4-549f-437a-9113-51d1003b5668" containerID="f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a" exitCode=0 Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.155806 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerDied","Data":"f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.157218 4804 generic.go:334] "Generic (PLEG): container finished" podID="a4586997-59ed-4e13-b7ec-3146711f642c" containerID="07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f" exitCode=0 Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.157285 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerDied","Data":"07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.159199 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" 
event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerStarted","Data":"a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.159381 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.160438 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerStarted","Data":"95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.160826 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.164283 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-vmdbt" event={"ID":"903b6b99-b94d-428a-9c9c-7465ef27ad40","Type":"ContainerDied","Data":"ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0"} Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.164332 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab3f0d513ec7bb60cd7ed02802956a145c0323826eed1b2f7bb7ee645c397fe0" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.164421 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-vmdbt" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.202733 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=61.202712416 podStartE2EDuration="1m1.202712416s" podCreationTimestamp="2026-01-28 11:39:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:45.198107498 +0000 UTC m=+1120.992987482" watchObservedRunningTime="2026-01-28 11:40:45.202712416 +0000 UTC m=+1120.997592400" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.221010 4804 scope.go:117] "RemoveContainer" containerID="a99548645bbd8f2136f9f7fb1affc4d254741865c846f3d3f9116fc59fc1d178" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.234505 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=39.439972743 podStartE2EDuration="1m2.234486916s" podCreationTimestamp="2026-01-28 11:39:43 +0000 UTC" firstStartedPulling="2026-01-28 11:39:45.417277105 +0000 UTC m=+1061.212157089" lastFinishedPulling="2026-01-28 11:40:08.211791278 +0000 UTC m=+1084.006671262" observedRunningTime="2026-01-28 11:40:45.227849786 +0000 UTC m=+1121.022729770" watchObservedRunningTime="2026-01-28 11:40:45.234486916 +0000 UTC m=+1121.029366900" Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.280177 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:45 crc kubenswrapper[4804]: I0128 11:40:45.289505 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-8554648995-vnmsg"] Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.173818 4804 generic.go:334] "Generic (PLEG): container finished" podID="38148c07-9662-4f0b-8285-a02633a7cd37" 
containerID="33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d" exitCode=0 Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.173912 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerDied","Data":"33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d"} Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.504004 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.619819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") pod \"08795da4-549f-437a-9113-51d1003b5668\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.619973 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") pod \"08795da4-549f-437a-9113-51d1003b5668\" (UID: \"08795da4-549f-437a-9113-51d1003b5668\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.620352 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "08795da4-549f-437a-9113-51d1003b5668" (UID: "08795da4-549f-437a-9113-51d1003b5668"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.620618 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/08795da4-549f-437a-9113-51d1003b5668-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.629148 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk" (OuterVolumeSpecName: "kube-api-access-7k2hk") pod "08795da4-549f-437a-9113-51d1003b5668" (UID: "08795da4-549f-437a-9113-51d1003b5668"). InnerVolumeSpecName "kube-api-access-7k2hk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.668983 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.722281 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7k2hk\" (UniqueName: \"kubernetes.io/projected/08795da4-549f-437a-9113-51d1003b5668-kube-api-access-7k2hk\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.737562 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.823395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") pod \"a4586997-59ed-4e13-b7ec-3146711f642c\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.823570 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") pod \"a4586997-59ed-4e13-b7ec-3146711f642c\" (UID: \"a4586997-59ed-4e13-b7ec-3146711f642c\") " Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.823969 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "a4586997-59ed-4e13-b7ec-3146711f642c" (UID: "a4586997-59ed-4e13-b7ec-3146711f642c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.824103 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/a4586997-59ed-4e13-b7ec-3146711f642c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.828032 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf" (OuterVolumeSpecName: "kube-api-access-x26qf") pod "a4586997-59ed-4e13-b7ec-3146711f642c" (UID: "a4586997-59ed-4e13-b7ec-3146711f642c"). InnerVolumeSpecName "kube-api-access-x26qf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.924301 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" path="/var/lib/kubelet/pods/a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c/volumes" Jan 28 11:40:46 crc kubenswrapper[4804]: I0128 11:40:46.926682 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x26qf\" (UniqueName: \"kubernetes.io/projected/a4586997-59ed-4e13-b7ec-3146711f642c-kube-api-access-x26qf\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.093323 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-w544f"] Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094037 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094186 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094299 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094392 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094487 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094586 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" 
containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094680 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094782 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.094873 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.094959 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.095079 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08795da4-549f-437a-9113-51d1003b5668" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095169 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="08795da4-549f-437a-9113-51d1003b5668" containerName="mariadb-account-create-update" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.095254 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095330 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerName="mariadb-database-create" Jan 28 11:40:47 crc kubenswrapper[4804]: E0128 11:40:47.095387 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="init" Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095453 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="init"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095671 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="08795da4-549f-437a-9113-51d1003b5668" containerName="mariadb-account-create-update"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095745 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a31c7f4f-6e39-4542-b3f8-d5bfdcc0831c" containerName="dnsmasq-dns"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095814 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" containerName="mariadb-database-create"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095872 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" containerName="mariadb-account-create-update"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.095957 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" containerName="mariadb-database-create"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.096019 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b24cf506-8c35-4dd4-8c9b-f4fe2d6a1fd1" containerName="mariadb-account-create-update"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.096097 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" containerName="mariadb-database-create"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.096924 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.099183 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-cell1-mariadb-root-db-secret"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.110681 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w544f"]
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.185254 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-f8f4-account-create-update-mg2gd" event={"ID":"a4586997-59ed-4e13-b7ec-3146711f642c","Type":"ContainerDied","Data":"9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b"}
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.185304 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b946f5044f4683cff756fe0f12a25635572e5f13216b661e8d5088a9dd3482b"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.185287 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-mg2gd"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.190981 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-ea29-account-create-update-fd9sb"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.191562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-ea29-account-create-update-fd9sb" event={"ID":"08795da4-549f-437a-9113-51d1003b5668","Type":"ContainerDied","Data":"ddb1f30d4961cdeec5b26416a480e4c0b1a3e9e39eedab64e0edf4f1452782c2"}
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.191581 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ddb1f30d4961cdeec5b26416a480e4c0b1a3e9e39eedab64e0edf4f1452782c2"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.232903 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.233338 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.336313 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.336445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.337160 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.355170 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"root-account-create-update-w544f\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") " pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.421132 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.586968 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.647660 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.678862 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"swift-storage-0\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " pod="openstack/swift-storage-0"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.748839 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") pod \"38148c07-9662-4f0b-8285-a02633a7cd37\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") "
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.749025 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") pod \"38148c07-9662-4f0b-8285-a02633a7cd37\" (UID: \"38148c07-9662-4f0b-8285-a02633a7cd37\") "
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.750111 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "38148c07-9662-4f0b-8285-a02633a7cd37" (UID: "38148c07-9662-4f0b-8285-a02633a7cd37"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.752785 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l" (OuterVolumeSpecName: "kube-api-access-rnz5l") pod "38148c07-9662-4f0b-8285-a02633a7cd37" (UID: "38148c07-9662-4f0b-8285-a02633a7cd37"). InnerVolumeSpecName "kube-api-access-rnz5l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.851306 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/38148c07-9662-4f0b-8285-a02633a7cd37-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.851343 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnz5l\" (UniqueName: \"kubernetes.io/projected/38148c07-9662-4f0b-8285-a02633a7cd37-kube-api-access-rnz5l\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.940014 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0"
Jan 28 11:40:47 crc kubenswrapper[4804]: I0128 11:40:47.949254 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-w544f"]
Jan 28 11:40:47 crc kubenswrapper[4804]: W0128 11:40:47.955978 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podda587a6a_8109_4c08_8395_f4cd6b078dc7.slice/crio-6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76 WatchSource:0}: Error finding container 6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76: Status 404 returned error can't find the container with id 6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.197857 4804 generic.go:334] "Generic (PLEG): container finished" podID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" containerID="acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5" exitCode=0
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.197937 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerDied","Data":"acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5"}
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.200188 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-ec8f-account-create-update-wm9f2" event={"ID":"38148c07-9662-4f0b-8285-a02633a7cd37","Type":"ContainerDied","Data":"1cdec6eb1be633affff1b7b15a04d38540b48582466e56d387986c60aa1a5c76"}
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.200218 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1cdec6eb1be633affff1b7b15a04d38540b48582466e56d387986c60aa1a5c76"
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.200266 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-ec8f-account-create-update-wm9f2"
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.202954 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerStarted","Data":"1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e"}
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.202987 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerStarted","Data":"6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76"}
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.241046 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/root-account-create-update-w544f" podStartSLOduration=1.241031468 podStartE2EDuration="1.241031468s" podCreationTimestamp="2026-01-28 11:40:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:40:48.231461884 +0000 UTC m=+1124.026341868" watchObservedRunningTime="2026-01-28 11:40:48.241031468 +0000 UTC m=+1124.035911452"
Jan 28 11:40:48 crc kubenswrapper[4804]: I0128 11:40:48.502027 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-storage-0"]
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.210551 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"4a5bec567872839575faf98626366f5cc236d0134aa37c746f2c87478bb70e91"}
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.211502 4804 generic.go:334] "Generic (PLEG): container finished" podID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerID="1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e" exitCode=0
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.211970 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerDied","Data":"1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e"}
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.324843 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-bnpvd"]
Jan 28 11:40:49 crc kubenswrapper[4804]: E0128 11:40:49.325682 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" containerName="mariadb-account-create-update"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.325694 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" containerName="mariadb-account-create-update"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.325859 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" containerName="mariadb-account-create-update"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.326367 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.329206 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.329422 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dv6zq"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.347419 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bnpvd"]
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479505 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479609 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479636 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.479685 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581513 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581609 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.581662 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.593186 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.594024 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.594178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.604218 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod \"glance-db-sync-bnpvd\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.658554 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bnpvd"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.808683 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.990608 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") "
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991116 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") "
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991155 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") "
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991212 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") "
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991262 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") "
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991300 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") "
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991347 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") pod \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\" (UID: \"cb46a04b-0e73-46fb-bcdf-a670c30d5531\") "
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991510 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.991686 4804 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-ring-data-devices\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.992342 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:40:49 crc kubenswrapper[4804]: I0128 11:40:49.998523 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr" (OuterVolumeSpecName: "kube-api-access-gmwrr") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "kube-api-access-gmwrr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.003003 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "dispersionconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.015599 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts" (OuterVolumeSpecName: "scripts") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.019182 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.021111 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "cb46a04b-0e73-46fb-bcdf-a670c30d5531" (UID: "cb46a04b-0e73-46fb-bcdf-a670c30d5531"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.092980 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gmwrr\" (UniqueName: \"kubernetes.io/projected/cb46a04b-0e73-46fb-bcdf-a670c30d5531-kube-api-access-gmwrr\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093024 4804 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-swiftconf\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093037 4804 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/cb46a04b-0e73-46fb-bcdf-a670c30d5531-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093049 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093061 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/cb46a04b-0e73-46fb-bcdf-a670c30d5531-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.093072 4804 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/cb46a04b-0e73-46fb-bcdf-a670c30d5531-dispersionconf\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.236183 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-ring-rebalance-jxgc9" event={"ID":"cb46a04b-0e73-46fb-bcdf-a670c30d5531","Type":"ContainerDied","Data":"54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565"}
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.236229 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="54320a0848015693cbe26d3d7b9ea2f77d1e8b3b63d64b6c373c37c52c3bf565"
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.236307 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-ring-rebalance-jxgc9"
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.247111 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb"}
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.247163 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf"}
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.277587 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-bnpvd"]
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.764163 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.911809 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") pod \"da587a6a-8109-4c08-8395-f4cd6b078dc7\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") "
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.911974 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") pod \"da587a6a-8109-4c08-8395-f4cd6b078dc7\" (UID: \"da587a6a-8109-4c08-8395-f4cd6b078dc7\") "
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.913944 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "da587a6a-8109-4c08-8395-f4cd6b078dc7" (UID: "da587a6a-8109-4c08-8395-f4cd6b078dc7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:40:50 crc kubenswrapper[4804]: I0128 11:40:50.916270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4" (OuterVolumeSpecName: "kube-api-access-8rxf4") pod "da587a6a-8109-4c08-8395-f4cd6b078dc7" (UID: "da587a6a-8109-4c08-8395-f4cd6b078dc7"). InnerVolumeSpecName "kube-api-access-8rxf4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.013901 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/da587a6a-8109-4c08-8395-f4cd6b078dc7-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.014403 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8rxf4\" (UniqueName: \"kubernetes.io/projected/da587a6a-8109-4c08-8395-f4cd6b078dc7-kube-api-access-8rxf4\") on node \"crc\" DevicePath \"\""
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.258098 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerStarted","Data":"98e4e548f770aa987da379b1ee8df638450d9e9a8748002b4fc5eb02b710f97e"}
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.274000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5"}
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.274050 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a"}
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.277207 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-w544f" event={"ID":"da587a6a-8109-4c08-8395-f4cd6b078dc7","Type":"ContainerDied","Data":"6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76"}
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.277229 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6fd7b7585096ea4903a88faf2296a1b1df346f4826181a40d086b11a9e71ea76"
Jan 28 11:40:51 crc kubenswrapper[4804]: I0128 11:40:51.277337 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-w544f"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.036090 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" probeResult="failure" output=<
Jan 28 11:40:53 crc kubenswrapper[4804]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Jan 28 11:40:53 crc kubenswrapper[4804]: >
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.129342 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pfzkj"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.193808 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-pfzkj"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304746 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657"}
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304868 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161"}
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304897 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6"}
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.304947 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20"}
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.452479 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"]
Jan 28 11:40:53 crc kubenswrapper[4804]: E0128 11:40:53.455372 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerName="mariadb-account-create-update"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455408 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerName="mariadb-account-create-update"
Jan 28 11:40:53 crc kubenswrapper[4804]: E0128 11:40:53.455467 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" containerName="swift-ring-rebalance"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455474 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" containerName="swift-ring-rebalance"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455639 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" containerName="swift-ring-rebalance"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.455659 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" containerName="mariadb-account-create-update"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.456260 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.458106 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.468140 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"]
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.502711 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.502841 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.502904 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.503018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.503056 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.503150 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605126 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605216 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx"
Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605248 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod 
\"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605307 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605341 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605394 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605506 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605517 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod 
\"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.605511 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.606271 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.607548 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.633820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"ovn-controller-xtdr8-config-gnhrx\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:53 crc kubenswrapper[4804]: I0128 11:40:53.780249 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:54 crc kubenswrapper[4804]: I0128 11:40:54.359522 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:54 crc kubenswrapper[4804]: I0128 11:40:54.974252 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.379852 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.379988 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.379998 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.380007 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.387049 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerStarted","Data":"afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 
11:40:55.387088 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerStarted","Data":"b514434fb6ce4745650cc1037c08aecd64c10f4b8e573fd164165ed6eb41a03d"} Jan 28 11:40:55 crc kubenswrapper[4804]: I0128 11:40:55.751052 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.421613 4804 generic.go:334] "Generic (PLEG): container finished" podID="f4c1d6ce-c590-416e-bca1-300d36330497" containerID="afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b" exitCode=0 Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.422240 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerDied","Data":"afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.430919 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.430966 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.430980 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerStarted","Data":"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"} Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.488384 4804 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-storage-0" podStartSLOduration=20.511143241 podStartE2EDuration="26.488364053s" podCreationTimestamp="2026-01-28 11:40:30 +0000 UTC" firstStartedPulling="2026-01-28 11:40:48.523177879 +0000 UTC m=+1124.318057863" lastFinishedPulling="2026-01-28 11:40:54.500398691 +0000 UTC m=+1130.295278675" observedRunningTime="2026-01-28 11:40:56.482714193 +0000 UTC m=+1132.277594177" watchObservedRunningTime="2026-01-28 11:40:56.488364053 +0000 UTC m=+1132.283244037" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.774006 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.775718 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.779012 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-swift-storage-0" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.788675 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897176 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897258 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897520 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897659 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897690 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.897821 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.999599 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: 
\"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.999661 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:56 crc kubenswrapper[4804]: I0128 11:40:56.999684 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000587 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000656 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " 
pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.000830 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.001058 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.001804 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.001999 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.002597 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.030838 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"dnsmasq-dns-6d5b6d6b67-b7zpn\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") " pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.095368 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.205873 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308578 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308726 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308780 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308824 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.308911 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") pod \"f4c1d6ce-c590-416e-bca1-300d36330497\" (UID: \"f4c1d6ce-c590-416e-bca1-300d36330497\") " Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309394 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309675 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts" (OuterVolumeSpecName: "scripts") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309711 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run" (OuterVolumeSpecName: "var-run") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.309745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.310020 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.357078 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz" (OuterVolumeSpecName: "kube-api-access-n5fxz") pod "f4c1d6ce-c590-416e-bca1-300d36330497" (UID: "f4c1d6ce-c590-416e-bca1-300d36330497"). InnerVolumeSpecName "kube-api-access-n5fxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413403 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413432 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5fxz\" (UniqueName: \"kubernetes.io/projected/f4c1d6ce-c590-416e-bca1-300d36330497-kube-api-access-n5fxz\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413445 4804 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413454 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413462 4804 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/f4c1d6ce-c590-416e-bca1-300d36330497-additional-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.413472 4804 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/f4c1d6ce-c590-416e-bca1-300d36330497-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.454737 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-gnhrx" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.454978 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-gnhrx" event={"ID":"f4c1d6ce-c590-416e-bca1-300d36330497","Type":"ContainerDied","Data":"b514434fb6ce4745650cc1037c08aecd64c10f4b8e573fd164165ed6eb41a03d"} Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.455451 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b514434fb6ce4745650cc1037c08aecd64c10f4b8e573fd164165ed6eb41a03d" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.650016 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:40:57 crc kubenswrapper[4804]: E0128 11:40:57.650454 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" containerName="ovn-config" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.650471 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" containerName="ovn-config" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.650620 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" containerName="ovn-config" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.651194 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.657904 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.659204 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.663156 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.670638 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.677145 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721735 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721852 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721905 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.721969 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.731269 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-jqlrv"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.732327 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.751745 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jqlrv"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.810866 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.834962 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835042 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835082 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835119 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835151 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835188 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835783 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.835802 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.850815 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"barbican-8522-account-create-update-rlttq\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") " pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.854436 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"barbican-db-create-n6kfg\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") " pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.914983 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.916210 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.918662 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.919107 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.919371 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.920055 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.936894 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.937943 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.938133 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.938790 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " 
pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.968184 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.979222 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n6kfg" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.984303 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.987195 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"] Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.988197 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Jan 28 11:40:57 crc kubenswrapper[4804]: I0128 11:40:57.989367 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:57.997783 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"cinder-db-create-jqlrv\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") " pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.040551 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.040630 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.040672 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.066914 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.068073 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.074792 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.076582 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.078917 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jqlrv" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.081290 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-xtdr8" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.083350 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.091102 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.110750 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142518 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142579 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m6ct4\" (UniqueName: 
\"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142619 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142651 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142675 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142703 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142747 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142785 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.142813 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.147473 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.163247 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.164084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htpjf\" (UniqueName: 
\"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"keystone-db-sync-5r69w\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.243960 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244010 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244040 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244191 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244213 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.244825 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.245731 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.246488 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.251116 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.263374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"cinder-753f-account-create-update-2x2r6\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") " pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.267402 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"neutron-db-create-kcr62\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") " pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.280862 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"neutron-a291-account-create-update-dlt8t\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") " pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.316392 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.330294 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xtdr8-config-gnhrx"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.364375 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.388839 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kcr62" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.407650 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.428635 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.429990 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.440367 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.461367 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerStarted","Data":"77e9032d4b1d0896ab98b1033b917f2c0d9b702e320f4756d982cdbd575cb2f8"} Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.466412 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550175 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550222 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod 
\"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550385 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550428 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550570 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.550679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652605 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652851 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652927 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652952 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.652999 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653018 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653016 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653085 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.653118 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.654411 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.655855 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") 
pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk"
Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.679211 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"ovn-controller-xtdr8-config-zc8nk\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") " pod="openstack/ovn-controller-xtdr8-config-zc8nk"
Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.746013 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk"
Jan 28 11:40:58 crc kubenswrapper[4804]: I0128 11:40:58.948120 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4c1d6ce-c590-416e-bca1-300d36330497" path="/var/lib/kubelet/pods/f4c1d6ce-c590-416e-bca1-300d36330497/volumes"
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.361829 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"]
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.470494 4804 generic.go:334] "Generic (PLEG): container finished" podID="46956e08-e267-4021-bf42-69a3e35826e0" containerID="0e30a6113bcc313e3cf69e2a658168ba99f0082887992b529bd0b556c9a4b494" exitCode=0
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.470737 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerDied","Data":"0e30a6113bcc313e3cf69e2a658168ba99f0082887992b529bd0b556c9a4b494"}
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.535044 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-kcr62"]
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.549906 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-n6kfg"]
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.560396 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-5r69w"]
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.571799 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-jqlrv"]
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.583093 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"]
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.591281 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"]
Jan 28 11:40:59 crc kubenswrapper[4804]: I0128 11:40:59.643001 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"]
Jan 28 11:41:06 crc kubenswrapper[4804]: W0128 11:41:06.024547 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79faecc7_1388_420a_9eee_b47d0ce87f34.slice/crio-e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4 WatchSource:0}: Error finding container e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4: Status 404 returned error can't find the container with id e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4
Jan 28 11:41:06 crc kubenswrapper[4804]: W0128 11:41:06.043041 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod518f34a2_84c4_4115_a28d_0251d0fa8064.slice/crio-516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced WatchSource:0}: Error finding container 516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced: Status 404 returned error can't find the container with id 516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced
Jan 28 11:41:06 crc kubenswrapper[4804]: W0128 11:41:06.049046 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbfa2fee1_7544_426c_8cbf_17e7a2b1693c.slice/crio-7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2 WatchSource:0}: Error finding container 7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2: Status 404 returned error can't find the container with id 7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.052399 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.052406 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.052405 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.575256 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerStarted","Data":"6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.575581 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerStarted","Data":"550e26edd6ee8229306ffc708faae50e44129550e0ff20f7f29fa20dce60c760"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.584220 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerStarted","Data":"e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.598457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerStarted","Data":"300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.598716 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.602407 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-8522-account-create-update-rlttq" podStartSLOduration=9.602391351 podStartE2EDuration="9.602391351s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.596026828 +0000 UTC m=+1142.390906812" watchObservedRunningTime="2026-01-28 11:41:06.602391351 +0000 UTC m=+1142.397271335"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.602755 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerStarted","Data":"eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.602816 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerStarted","Data":"0d5a060f43163338a8ede6064d4710fa13e9db3e35b949179a6e65fb27dffc89"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.606647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerStarted","Data":"17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.606688 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerStarted","Data":"924dc54cfa60a8f32123f92837e50932d9f12563881b5648d2cb23e671fafa38"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.608392 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-zc8nk" event={"ID":"bfa2fee1-7544-426c-8cbf-17e7a2b1693c","Type":"ContainerStarted","Data":"7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.614457 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerStarted","Data":"0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.614498 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerStarted","Data":"afaf02dd74d091d615efce3e75faa15a0eb668080b3859f1c4c081a5ec9ff9ff"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.618231 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerStarted","Data":"350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.618281 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerStarted","Data":"39321d26e0256fe4a7dea0f7638803063c75fe53025761d0a61456609991a4b1"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.628527 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerStarted","Data":"90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.628576 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerStarted","Data":"516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced"}
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.646929 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podStartSLOduration=10.646912888 podStartE2EDuration="10.646912888s" podCreationTimestamp="2026-01-28 11:40:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.623258434 +0000 UTC m=+1142.418138428" watchObservedRunningTime="2026-01-28 11:41:06.646912888 +0000 UTC m=+1142.441792872"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.651131 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-xtdr8-config-zc8nk" podStartSLOduration=8.651114641 podStartE2EDuration="8.651114641s" podCreationTimestamp="2026-01-28 11:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.646241456 +0000 UTC m=+1142.441121440" watchObservedRunningTime="2026-01-28 11:41:06.651114641 +0000 UTC m=+1142.445994625"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.666081 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-jqlrv" podStartSLOduration=9.666063737 podStartE2EDuration="9.666063737s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.662361149 +0000 UTC m=+1142.457241143" watchObservedRunningTime="2026-01-28 11:41:06.666063737 +0000 UTC m=+1142.460943711"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.685046 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-a291-account-create-update-dlt8t" podStartSLOduration=8.68502747 podStartE2EDuration="8.68502747s" podCreationTimestamp="2026-01-28 11:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.680565999 +0000 UTC m=+1142.475445983" watchObservedRunningTime="2026-01-28 11:41:06.68502747 +0000 UTC m=+1142.479907464"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.713823 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-753f-account-create-update-2x2r6" podStartSLOduration=9.713806217 podStartE2EDuration="9.713806217s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.698557061 +0000 UTC m=+1142.493437055" watchObservedRunningTime="2026-01-28 11:41:06.713806217 +0000 UTC m=+1142.508686201"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.724067 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-create-n6kfg" podStartSLOduration=9.724049152 podStartE2EDuration="9.724049152s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.720781978 +0000 UTC m=+1142.515661952" watchObservedRunningTime="2026-01-28 11:41:06.724049152 +0000 UTC m=+1142.518929136"
Jan 28 11:41:06 crc kubenswrapper[4804]: I0128 11:41:06.740097 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-create-kcr62" podStartSLOduration=8.740082543 podStartE2EDuration="8.740082543s" podCreationTimestamp="2026-01-28 11:40:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:06.739331879 +0000 UTC m=+1142.534211873" watchObservedRunningTime="2026-01-28 11:41:06.740082543 +0000 UTC m=+1142.534962527"
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.638429 4804 generic.go:334] "Generic (PLEG): container finished" podID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerID="0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4" exitCode=0
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.638564 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerDied","Data":"0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.640608 4804 generic.go:334] "Generic (PLEG): container finished" podID="57723f90-020a-42b7-ad6c-49e998417f27" containerID="eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f" exitCode=0
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.640670 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerDied","Data":"eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.642078 4804 generic.go:334] "Generic (PLEG): container finished" podID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerID="350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82" exitCode=0
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.642132 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerDied","Data":"350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.643956 4804 generic.go:334] "Generic (PLEG): container finished" podID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerID="90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1" exitCode=0
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.644007 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerDied","Data":"90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.647732 4804 generic.go:334] "Generic (PLEG): container finished" podID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerID="17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8" exitCode=0
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.647807 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerDied","Data":"17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.649803 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerStarted","Data":"630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.656276 4804 generic.go:334] "Generic (PLEG): container finished" podID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerID="6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37" exitCode=0
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.656390 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerDied","Data":"6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.661176 4804 generic.go:334] "Generic (PLEG): container finished" podID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerID="91137eb6aeea940f4af2b3e77f249fa514f8d6f12484bb39c0b7af92b6cead6f" exitCode=0
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.661858 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-zc8nk" event={"ID":"bfa2fee1-7544-426c-8cbf-17e7a2b1693c","Type":"ContainerDied","Data":"91137eb6aeea940f4af2b3e77f249fa514f8d6f12484bb39c0b7af92b6cead6f"}
Jan 28 11:41:07 crc kubenswrapper[4804]: I0128 11:41:07.769682 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-bnpvd" podStartSLOduration=2.9189039660000002 podStartE2EDuration="18.769664472s" podCreationTimestamp="2026-01-28 11:40:49 +0000 UTC" firstStartedPulling="2026-01-28 11:40:50.321062581 +0000 UTC m=+1126.115942565" lastFinishedPulling="2026-01-28 11:41:06.171823087 +0000 UTC m=+1141.966703071" observedRunningTime="2026-01-28 11:41:07.768472254 +0000 UTC m=+1143.563352238" watchObservedRunningTime="2026-01-28 11:41:07.769664472 +0000 UTC m=+1143.564544456"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.097786 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.156183 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"]
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.156416 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" containerID="cri-o://a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454" gracePeriod=10
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.395392 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n6kfg"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.402779 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jqlrv"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.406898 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.413009 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.440067 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-kcr62"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.460258 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.460616 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544649 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") pod \"518f34a2-84c4-4115-a28d-0251d0fa8064\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544702 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544730 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") pod \"518f34a2-84c4-4115-a28d-0251d0fa8064\" (UID: \"518f34a2-84c4-4115-a28d-0251d0fa8064\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544792 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") pod \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544855 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") pod \"04ea6e04-5420-4f5b-911f-cdaede8220ab\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544892 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544929 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.544949 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545007 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545024 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") pod \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\" (UID: \"12849043-1f8e-4d1f-aae3-9cbc35ea4361\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545057 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") pod \"dc6a2a42-6519-46c6-bb24-074e5096001f\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545092 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") pod \"dc6a2a42-6519-46c6-bb24-074e5096001f\" (UID: \"dc6a2a42-6519-46c6-bb24-074e5096001f\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545117 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") pod \"04ea6e04-5420-4f5b-911f-cdaede8220ab\" (UID: \"04ea6e04-5420-4f5b-911f-cdaede8220ab\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545143 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") pod \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\" (UID: \"bfa2fee1-7544-426c-8cbf-17e7a2b1693c\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545498 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run" (OuterVolumeSpecName: "var-run") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.545559 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.546745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12849043-1f8e-4d1f-aae3-9cbc35ea4361" (UID: "12849043-1f8e-4d1f-aae3-9cbc35ea4361"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.547438 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "dc6a2a42-6519-46c6-bb24-074e5096001f" (UID: "dc6a2a42-6519-46c6-bb24-074e5096001f"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.548073 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "additional-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.553016 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "518f34a2-84c4-4115-a28d-0251d0fa8064" (UID: "518f34a2-84c4-4115-a28d-0251d0fa8064"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.553754 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts" (OuterVolumeSpecName: "scripts") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.554722 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "04ea6e04-5420-4f5b-911f-cdaede8220ab" (UID: "04ea6e04-5420-4f5b-911f-cdaede8220ab"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.558997 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4" (OuterVolumeSpecName: "kube-api-access-m6ct4") pod "518f34a2-84c4-4115-a28d-0251d0fa8064" (UID: "518f34a2-84c4-4115-a28d-0251d0fa8064"). InnerVolumeSpecName "kube-api-access-m6ct4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.574070 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s" (OuterVolumeSpecName: "kube-api-access-wqx6s") pod "04ea6e04-5420-4f5b-911f-cdaede8220ab" (UID: "04ea6e04-5420-4f5b-911f-cdaede8220ab"). InnerVolumeSpecName "kube-api-access-wqx6s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.581557 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm" (OuterVolumeSpecName: "kube-api-access-b7qzm") pod "dc6a2a42-6519-46c6-bb24-074e5096001f" (UID: "dc6a2a42-6519-46c6-bb24-074e5096001f"). InnerVolumeSpecName "kube-api-access-b7qzm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.581911 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd" (OuterVolumeSpecName: "kube-api-access-p9lkd") pod "bfa2fee1-7544-426c-8cbf-17e7a2b1693c" (UID: "bfa2fee1-7544-426c-8cbf-17e7a2b1693c"). InnerVolumeSpecName "kube-api-access-p9lkd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582417 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582531 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582644 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.582954 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg" (OuterVolumeSpecName: "kube-api-access-8zqzg") pod "12849043-1f8e-4d1f-aae3-9cbc35ea4361" (UID: "12849043-1f8e-4d1f-aae3-9cbc35ea4361"). InnerVolumeSpecName "kube-api-access-8zqzg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.584249 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.584412 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad" gracePeriod=600
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") pod \"57723f90-020a-42b7-ad6c-49e998417f27\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647531 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") pod \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647622 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") pod \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\" (UID: \"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.647755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") pod \"57723f90-020a-42b7-ad6c-49e998417f27\" (UID: \"57723f90-020a-42b7-ad6c-49e998417f27\") "
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648446 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/dc6a2a42-6519-46c6-bb24-074e5096001f-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648613 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b7qzm\" (UniqueName: \"kubernetes.io/projected/dc6a2a42-6519-46c6-bb24-074e5096001f-kube-api-access-b7qzm\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648730 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/04ea6e04-5420-4f5b-911f-cdaede8220ab-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648826 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.648940 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m6ct4\" (UniqueName: \"kubernetes.io/projected/518f34a2-84c4-4115-a28d-0251d0fa8064-kube-api-access-m6ct4\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649065 4804 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-additional-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649151 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/518f34a2-84c4-4115-a28d-0251d0fa8064-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649239 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8zqzg\" (UniqueName: \"kubernetes.io/projected/12849043-1f8e-4d1f-aae3-9cbc35ea4361-kube-api-access-8zqzg\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649337 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqx6s\" (UniqueName: \"kubernetes.io/projected/04ea6e04-5420-4f5b-911f-cdaede8220ab-kube-api-access-wqx6s\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649444 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649550 4804 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-log-ovn\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649640 4804 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-var-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649715 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p9lkd\" (UniqueName: \"kubernetes.io/projected/bfa2fee1-7544-426c-8cbf-17e7a2b1693c-kube-api-access-p9lkd\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.649810 4804 reconciler_common.go:293] "Volume detached
for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12849043-1f8e-4d1f-aae3-9cbc35ea4361-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.650270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" (UID: "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.651131 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "57723f90-020a-42b7-ad6c-49e998417f27" (UID: "57723f90-020a-42b7-ad6c-49e998417f27"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.659113 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb" (OuterVolumeSpecName: "kube-api-access-lxhhb") pod "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" (UID: "ba69153d-cb1a-4a90-b52a-19ecc0f5b77a"). InnerVolumeSpecName "kube-api-access-lxhhb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.665145 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp" (OuterVolumeSpecName: "kube-api-access-n8jcp") pod "57723f90-020a-42b7-ad6c-49e998417f27" (UID: "57723f90-020a-42b7-ad6c-49e998417f27"). InnerVolumeSpecName "kube-api-access-n8jcp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.723324 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-753f-account-create-update-2x2r6" event={"ID":"ba69153d-cb1a-4a90-b52a-19ecc0f5b77a","Type":"ContainerDied","Data":"924dc54cfa60a8f32123f92837e50932d9f12563881b5648d2cb23e671fafa38"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.723969 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="924dc54cfa60a8f32123f92837e50932d9f12563881b5648d2cb23e671fafa38" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.723545 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-753f-account-create-update-2x2r6" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.725077 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-rlttq" event={"ID":"12849043-1f8e-4d1f-aae3-9cbc35ea4361","Type":"ContainerDied","Data":"550e26edd6ee8229306ffc708faae50e44129550e0ff20f7f29fa20dce60c760"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.725112 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="550e26edd6ee8229306ffc708faae50e44129550e0ff20f7f29fa20dce60c760" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.725167 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-8522-account-create-update-rlttq" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.733237 4804 generic.go:334] "Generic (PLEG): container finished" podID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerID="a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454" exitCode=0 Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.733295 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerDied","Data":"a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.734684 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8-config-zc8nk" event={"ID":"bfa2fee1-7544-426c-8cbf-17e7a2b1693c","Type":"ContainerDied","Data":"7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.734704 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fda529b88e3987c4ae4849b30d23d832271becc6d93ff3c2b38ec3458bd36e2" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.734709 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-xtdr8-config-zc8nk" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.735764 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-n6kfg" event={"ID":"04ea6e04-5420-4f5b-911f-cdaede8220ab","Type":"ContainerDied","Data":"afaf02dd74d091d615efce3e75faa15a0eb668080b3859f1c4c081a5ec9ff9ff"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.735809 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afaf02dd74d091d615efce3e75faa15a0eb668080b3859f1c4c081a5ec9ff9ff" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.735896 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-n6kfg" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751263 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n8jcp\" (UniqueName: \"kubernetes.io/projected/57723f90-020a-42b7-ad6c-49e998417f27-kube-api-access-n8jcp\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751287 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/57723f90-020a-42b7-ad6c-49e998417f27-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751301 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lxhhb\" (UniqueName: \"kubernetes.io/projected/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-kube-api-access-lxhhb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.751311 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.753131 4804 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/neutron-a291-account-create-update-dlt8t" event={"ID":"57723f90-020a-42b7-ad6c-49e998417f27","Type":"ContainerDied","Data":"0d5a060f43163338a8ede6064d4710fa13e9db3e35b949179a6e65fb27dffc89"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.753155 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0d5a060f43163338a8ede6064d4710fa13e9db3e35b949179a6e65fb27dffc89" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.753209 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-a291-account-create-update-dlt8t" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.757443 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-jqlrv" event={"ID":"dc6a2a42-6519-46c6-bb24-074e5096001f","Type":"ContainerDied","Data":"39321d26e0256fe4a7dea0f7638803063c75fe53025761d0a61456609991a4b1"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.757476 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="39321d26e0256fe4a7dea0f7638803063c75fe53025761d0a61456609991a4b1" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.757542 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-jqlrv" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.764856 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-kcr62" event={"ID":"518f34a2-84c4-4115-a28d-0251d0fa8064","Type":"ContainerDied","Data":"516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced"} Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.764902 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="516d57eaaff9063dd68dbb3a8b9ed031a8ba9b58b4e79c35bb862291bdb42ced" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.764953 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-kcr62" Jan 28 11:41:12 crc kubenswrapper[4804]: I0128 11:41:12.974685 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.158833 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.159200 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.159959 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.160098 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.160325 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") pod \"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\" (UID: 
\"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1\") " Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.171218 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7" (OuterVolumeSpecName: "kube-api-access-xzbk7") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "kube-api-access-xzbk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.228307 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.230111 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.232398 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config" (OuterVolumeSpecName: "config") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.234487 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" (UID: "b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.268965 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xzbk7\" (UniqueName: \"kubernetes.io/projected/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-kube-api-access-xzbk7\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269001 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269014 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269024 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.269035 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.554615 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:41:13 crc 
kubenswrapper[4804]: I0128 11:41:13.564359 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xtdr8-config-zc8nk"] Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.774216 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" event={"ID":"b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1","Type":"ContainerDied","Data":"99851c0d89d123f60d87fe5e7b4fa11b90a206a967c2a2ccd24c03d723ee66ce"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.774232 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-b8fbc5445-smhkb" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.774569 4804 scope.go:117] "RemoveContainer" containerID="a48f67c8adf2aa181768d9a9401b24e93ffd4b8affd530951dafed718efcc454" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.775981 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerStarted","Data":"9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.780062 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad" exitCode=0 Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.780100 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.780311 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95"} Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.794220 4804 scope.go:117] "RemoveContainer" containerID="cc41ce863945bdc29f63769a99ae0d6dadc7d7ef12a25abcef8a64fe330fdd73" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.801658 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-5r69w" podStartSLOduration=9.740742653 podStartE2EDuration="16.801641117s" podCreationTimestamp="2026-01-28 11:40:57 +0000 UTC" firstStartedPulling="2026-01-28 11:41:06.026926054 +0000 UTC m=+1141.821806038" lastFinishedPulling="2026-01-28 11:41:13.087824518 +0000 UTC m=+1148.882704502" observedRunningTime="2026-01-28 11:41:13.795752639 +0000 UTC m=+1149.590632623" watchObservedRunningTime="2026-01-28 11:41:13.801641117 +0000 UTC m=+1149.596521101" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.819154 4804 scope.go:117] "RemoveContainer" containerID="e2d1117c737baf6cd27ef1229c3435bfc59febfb941c2b84b434e736df46abc8" Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.824802 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:41:13 crc kubenswrapper[4804]: I0128 11:41:13.830839 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-b8fbc5445-smhkb"] Jan 28 11:41:14 crc kubenswrapper[4804]: I0128 11:41:14.927035 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" path="/var/lib/kubelet/pods/b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1/volumes" Jan 28 11:41:14 crc kubenswrapper[4804]: I0128 11:41:14.928632 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" path="/var/lib/kubelet/pods/bfa2fee1-7544-426c-8cbf-17e7a2b1693c/volumes" Jan 28 11:41:17 crc kubenswrapper[4804]: 
I0128 11:41:17.826472 4804 generic.go:334] "Generic (PLEG): container finished" podID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerID="9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d" exitCode=0 Jan 28 11:41:17 crc kubenswrapper[4804]: I0128 11:41:17.826532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerDied","Data":"9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d"} Jan 28 11:41:18 crc kubenswrapper[4804]: I0128 11:41:18.836997 4804 generic.go:334] "Generic (PLEG): container finished" podID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerID="630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa" exitCode=0 Jan 28 11:41:18 crc kubenswrapper[4804]: I0128 11:41:18.837084 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerDied","Data":"630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa"} Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.133686 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.273002 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") pod \"79faecc7-1388-420a-9eee-b47d0ce87f34\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.273193 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") pod \"79faecc7-1388-420a-9eee-b47d0ce87f34\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.273267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") pod \"79faecc7-1388-420a-9eee-b47d0ce87f34\" (UID: \"79faecc7-1388-420a-9eee-b47d0ce87f34\") " Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.293093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf" (OuterVolumeSpecName: "kube-api-access-htpjf") pod "79faecc7-1388-420a-9eee-b47d0ce87f34" (UID: "79faecc7-1388-420a-9eee-b47d0ce87f34"). InnerVolumeSpecName "kube-api-access-htpjf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.309569 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "79faecc7-1388-420a-9eee-b47d0ce87f34" (UID: "79faecc7-1388-420a-9eee-b47d0ce87f34"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.325390 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data" (OuterVolumeSpecName: "config-data") pod "79faecc7-1388-420a-9eee-b47d0ce87f34" (UID: "79faecc7-1388-420a-9eee-b47d0ce87f34"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.375294 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htpjf\" (UniqueName: \"kubernetes.io/projected/79faecc7-1388-420a-9eee-b47d0ce87f34-kube-api-access-htpjf\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.375554 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.375632 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/79faecc7-1388-420a-9eee-b47d0ce87f34-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.846800 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-5r69w" event={"ID":"79faecc7-1388-420a-9eee-b47d0ce87f34","Type":"ContainerDied","Data":"e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4"} Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.846852 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1f18bcac4c6c36a072d92d374a23648dd2943d7e2139948bf7101c4c1a6cff4" Jan 28 11:41:19 crc kubenswrapper[4804]: I0128 11:41:19.846922 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-5r69w" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.179810 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180238 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180250 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180260 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180266 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180277 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180283 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180291 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180297 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180310 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="init" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180315 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="init" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180326 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180332 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180342 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180348 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180368 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerName="keystone-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180374 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerName="keystone-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180382 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="57723f90-020a-42b7-ad6c-49e998417f27" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180387 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="57723f90-020a-42b7-ad6c-49e998417f27" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.180399 4804 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerName="ovn-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180405 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerName="ovn-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180595 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180604 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" containerName="keystone-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180611 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180627 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bfa2fee1-7544-426c-8cbf-17e7a2b1693c" containerName="ovn-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180637 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5e6ef64-f8fd-40e1-ab14-2f7ddbdf5bc1" containerName="dnsmasq-dns" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180647 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180656 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.180665 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="57723f90-020a-42b7-ad6c-49e998417f27" containerName="mariadb-account-create-update" Jan 28 11:41:20 crc 
kubenswrapper[4804]: I0128 11:41:20.180673 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" containerName="mariadb-database-create" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.182055 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.201328 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.219815 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gczh7"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.221092 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.226723 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227049 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227197 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227431 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.227631 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.241868 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gczh7"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308846 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308933 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308975 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.308995 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.309063 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.309092 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.336565 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bnpvd" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.365642 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-2swjk"] Jan 28 11:41:20 crc kubenswrapper[4804]: E0128 11:41:20.366058 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerName="glance-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.366073 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerName="glance-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.366237 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" containerName="glance-db-sync" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.366757 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.377252 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.377530 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.377664 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p4q8k" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.378174 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2swjk"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412129 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412592 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412618 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412635 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412663 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412679 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412701 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412731 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412753 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412773 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412794 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412809 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412842 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v9bm\" (UniqueName: 
\"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412857 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412942 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412963 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: 
\"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.412986 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.414025 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.414817 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.415288 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.415443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.415987 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.416167 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.420036 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.423636 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.423729 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b" (OuterVolumeSpecName: "kube-api-access-9wn8b") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "kube-api-access-9wn8b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.425411 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.425681 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.431829 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.450922 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-b679z"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.451944 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.454816 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.454976 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.455167 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pl59s" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.461907 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b679z"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.468969 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"dnsmasq-dns-6f8c45789f-bxbzj\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.489530 4804 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wch49"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.491309 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.494858 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.495414 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-682gl" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.495557 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.514722 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.514819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") pod \"d5916f11-436f-46f9-b76e-304aa86f91a1\" (UID: \"d5916f11-436f-46f9-b76e-304aa86f91a1\") " Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515205 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515235 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515256 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515291 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515330 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515351 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515370 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod 
\"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515406 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515422 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515439 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515459 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515490 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " 
pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515544 4804 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.515560 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9wn8b\" (UniqueName: \"kubernetes.io/projected/d5916f11-436f-46f9-b76e-304aa86f91a1-kube-api-access-9wn8b\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.519407 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.519571 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.522133 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wch49"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.543939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.553080 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.554424 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.554457 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.555249 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"keystone-bootstrap-gczh7\" (UID: 
\"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.555839 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.559254 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.559385 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.559632 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.560719 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"cinder-db-sync-2swjk\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.565188 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"keystone-bootstrap-gczh7\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.588043 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.595973 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.597691 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rvw8m" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.598010 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616685 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616726 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616759 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616783 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616805 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616838 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616857 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616898 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4t5gm\" (UniqueName: 
\"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616914 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616931 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616948 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616967 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.616982 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.617002 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.617019 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.628351 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.631476 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.638055 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.667004 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.684093 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.685715 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.686730 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.691792 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data" (OuterVolumeSpecName: "config-data") pod "d5916f11-436f-46f9-b76e-304aa86f91a1" (UID: "d5916f11-436f-46f9-b76e-304aa86f91a1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.703051 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721318 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721353 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721379 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721397 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721418 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"barbican-db-sync-9brzz\" (UID: 
\"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721464 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721484 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721503 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721531 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " 
pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721554 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721580 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721637 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721667 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721681 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721716 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721755 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.721767 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d5916f11-436f-46f9-b76e-304aa86f91a1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.726462 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.726714 4804 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.729322 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.729404 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.730716 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.730901 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.731146 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 
crc kubenswrapper[4804]: I0128 11:41:20.732694 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.743303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.743303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.744575 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.745381 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.745754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") 
pod \"placement-db-sync-wch49\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.746841 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"neutron-db-sync-b679z\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") " pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.751754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"ceilometer-0\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823362 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823420 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823451 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"barbican-db-sync-9brzz\" (UID: 
\"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823522 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823560 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823609 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823677 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823747 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"barbican-db-sync-9brzz\" (UID: 
\"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.823770 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.835193 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.843767 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.845260 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"barbican-db-sync-9brzz\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.893399 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.894642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-bnpvd" event={"ID":"d5916f11-436f-46f9-b76e-304aa86f91a1","Type":"ContainerDied","Data":"98e4e548f770aa987da379b1ee8df638450d9e9a8748002b4fc5eb02b710f97e"} Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.895081 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98e4e548f770aa987da379b1ee8df638450d9e9a8748002b4fc5eb02b710f97e" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.895156 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-bnpvd" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.921587 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925028 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925110 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925162 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod 
\"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925194 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925236 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.925259 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.927571 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.928598 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " 
pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.929507 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.930234 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.930300 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.932710 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wch49" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.945682 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9brzz" Jan 28 11:41:20 crc kubenswrapper[4804]: I0128 11:41:20.974633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"dnsmasq-dns-fcfdd6f9f-d6b7l\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.021014 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.079115 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:21 crc kubenswrapper[4804]: W0128 11:41:21.128623 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda7fcfdff_464d_4f4a_b6f6_d5f864fb47e7.slice/crio-3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6 WatchSource:0}: Error finding container 3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6: Status 404 returned error can't find the container with id 3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6 Jan 28 11:41:21 crc kubenswrapper[4804]: E0128 11:41:21.184874 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5916f11_436f_46f9_b76e_304aa86f91a1.slice\": RecentStats: unable to find data in memory cache]" Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.313556 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.387409 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.415962 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.431519 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.468273 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gczh7"] Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.488601 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-2swjk"] Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540194 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540553 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540745 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.540980 
4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.541127 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.541221 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643355 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643441 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643485 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643510 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643532 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.643584 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.644276 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.644349 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.646156 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.652079 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.652131 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.667016 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"dnsmasq-dns-57c957c4ff-85r5r\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.726049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wch49"]
Jan 28 11:41:21 crc kubenswrapper[4804]: I0128 11:41:21.740630 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: W0128 11:41:21.769527 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b292a47_f331_472d_941e_193e41fee49f.slice/crio-4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7 WatchSource:0}: Error finding container 4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7: Status 404 returned error can't find the container with id 4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.881346 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.900559 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-b679z"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.914443 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerStarted","Data":"cd5a1fb1b75f267a6c5725321d259dcf2acd5836e7aa0491855baf75e38ef9de"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.916553 4804 generic.go:334] "Generic (PLEG): container finished" podID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerID="4c1c923612c015c747b5107243c527ca1074cc2a7e9bd605f2d99365a036305a" exitCode=0
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.916694 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" event={"ID":"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7","Type":"ContainerDied","Data":"4c1c923612c015c747b5107243c527ca1074cc2a7e9bd605f2d99365a036305a"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.916715 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" event={"ID":"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7","Type":"ContainerStarted","Data":"3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.922742 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerStarted","Data":"e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.922776 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerStarted","Data":"2f62a3c4a2cd081b1f832339c14d958fdc8b030abf65c3750cc0feb9582f280e"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.925057 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"fa6e1a12eec8f670dacaf476eeccb44cad0c7ce79723abf8463004426598a522"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.926681 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-9brzz"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.926729 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerStarted","Data":"4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.936825 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:21.972003 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gczh7" podStartSLOduration=1.9719835209999999 podStartE2EDuration="1.971983521s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:21.958594526 +0000 UTC m=+1157.753474510" watchObservedRunningTime="2026-01-28 11:41:21.971983521 +0000 UTC m=+1157.766863505"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.360777 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.378539 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.388926 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.389959 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dv6zq"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.390209 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.409853 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.487919 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.487989 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488047 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488073 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488143 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488179 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.488217 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.569384 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.571957 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.579549 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589447 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589496 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589536 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589588 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589646 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.589672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.593937 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.596434 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.598230 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.605990 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.618720 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.623226 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.628953 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: E0128 11:41:22.629927 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle glance kube-api-access-rkbdt], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-external-api-0" podUID="77990e19-1287-4a52-a755-927c3fc6f529"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.634103 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.643377 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.667915 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.678961 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704379 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704467 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704516 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704546 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704575 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704665 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.704708 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.805928 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.805983 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806013 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806031 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806045 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806107 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806137 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.806673 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.807684 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.817192 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: E0128 11:41:22.818742 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[combined-ca-bundle glance kube-api-access-h2glh scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/glance-default-internal-api-0" podUID="944195d2-3d17-4cc5-84b5-eb204312c37e"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.837028 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.837619 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.851621 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.876174 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.939033 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerStarted","Data":"39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.939070 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerStarted","Data":"60b8da9908ec3982ed55579ec364c45546383ec243c42fe055e29512244fd6d9"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.943186 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerStarted","Data":"17e0e19fde7a47cbcc9cf6fab97dc7b7cdb474a5ae0195fdbdcd149f07b46b07"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952260 4804 generic.go:334] "Generic (PLEG): container finished" podID="f59b13cd-bec2-4590-a661-0cf416b68290" containerID="927a2ab272600c80f47b45172a165cef75c84adb4271faee004dacfbc0c99580" exitCode=0
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952360 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" event={"ID":"f59b13cd-bec2-4590-a661-0cf416b68290","Type":"ContainerDied","Data":"927a2ab272600c80f47b45172a165cef75c84adb4271faee004dacfbc0c99580"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952387 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" event={"ID":"f59b13cd-bec2-4590-a661-0cf416b68290","Type":"ContainerStarted","Data":"51ff06357611f23f4eba2b45be00b86265a17d7ddf09ab8ad09dd74930e3724b"}
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952780 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.952869 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:22 crc kubenswrapper[4804]: I0128 11:41:22.968821 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-b679z" podStartSLOduration=2.968794628 podStartE2EDuration="2.968794628s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:22.964701857 +0000 UTC m=+1158.759581841" watchObservedRunningTime="2026-01-28 11:41:22.968794628 +0000 UTC m=+1158.763674602"
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.004109 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.004374 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113464 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113840 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113894 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113937 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.113996 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114017 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114063 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114107 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114174 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114214 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114239 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114268 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") pod \"77990e19-1287-4a52-a755-927c3fc6f529\" (UID: \"77990e19-1287-4a52-a755-927c3fc6f529\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.114289 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") pod \"944195d2-3d17-4cc5-84b5-eb204312c37e\" (UID: \"944195d2-3d17-4cc5-84b5-eb204312c37e\") "
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.120240 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.121094 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.125927 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs" (OuterVolumeSpecName: "logs") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.127149 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts" (OuterVolumeSpecName: "scripts") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.127802 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data" (OuterVolumeSpecName: "config-data") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.133643 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs" (OuterVolumeSpecName: "logs") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "logs".
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.138223 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt" (OuterVolumeSpecName: "kube-api-access-rkbdt") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "kube-api-access-rkbdt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.140665 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.148281 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.149653 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh" (OuterVolumeSpecName: "kube-api-access-h2glh") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "kube-api-access-h2glh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.152924 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts" (OuterVolumeSpecName: "scripts") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.154028 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.159272 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data" (OuterVolumeSpecName: "config-data") pod "77990e19-1287-4a52-a755-927c3fc6f529" (UID: "77990e19-1287-4a52-a755-927c3fc6f529"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.160143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "944195d2-3d17-4cc5-84b5-eb204312c37e" (UID: "944195d2-3d17-4cc5-84b5-eb204312c37e"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.183106 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217602 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217643 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217657 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217668 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217704 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217714 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/944195d2-3d17-4cc5-84b5-eb204312c37e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217724 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/77990e19-1287-4a52-a755-927c3fc6f529-httpd-run\") on node \"crc\" 
DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217769 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217781 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217792 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.217802 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2glh\" (UniqueName: \"kubernetes.io/projected/944195d2-3d17-4cc5-84b5-eb204312c37e-kube-api-access-h2glh\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.235314 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/77990e19-1287-4a52-a755-927c3fc6f529-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.235344 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/944195d2-3d17-4cc5-84b5-eb204312c37e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.235355 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkbdt\" (UniqueName: \"kubernetes.io/projected/77990e19-1287-4a52-a755-927c3fc6f529-kube-api-access-rkbdt\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.246329 4804 operation_generator.go:917] UnmountDevice succeeded for volume 
"local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.267276 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.342023 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.342050 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.413946 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.470632 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.477949 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.546748 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548164 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548271 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548313 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548341 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548366 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548440 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548476 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548563 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548599 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.548706 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") pod \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\" (UID: \"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 
11:41:23.548766 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") pod \"f59b13cd-bec2-4590-a661-0cf416b68290\" (UID: \"f59b13cd-bec2-4590-a661-0cf416b68290\") " Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.561075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs" (OuterVolumeSpecName: "kube-api-access-pzbzs") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "kube-api-access-pzbzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.561709 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx" (OuterVolumeSpecName: "kube-api-access-9rmjx") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "kube-api-access-9rmjx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.577345 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.578030 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.583498 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.588792 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config" (OuterVolumeSpecName: "config") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.591543 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.601859 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config" (OuterVolumeSpecName: "config") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.606414 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.618602 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.619719 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f59b13cd-bec2-4590-a661-0cf416b68290" (UID: "f59b13cd-bec2-4590-a661-0cf416b68290"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.629535 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" (UID: "a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653124 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653158 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9rmjx\" (UniqueName: \"kubernetes.io/projected/f59b13cd-bec2-4590-a661-0cf416b68290-kube-api-access-9rmjx\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653171 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653180 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653189 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653198 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653207 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzbzs\" (UniqueName: \"kubernetes.io/projected/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-kube-api-access-pzbzs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653215 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653224 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653232 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653239 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f59b13cd-bec2-4590-a661-0cf416b68290-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.653248 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.974204 4804 generic.go:334] "Generic (PLEG): container finished" podID="91b4be5e-0f8c-495e-869d-38a047276f33" containerID="653ef14818c2af14b35bf5c8eff2142bb2b6b74279ede6a70a0def4afe23f6e5" exitCode=0 Jan 28 11:41:23 crc 
kubenswrapper[4804]: I0128 11:41:23.974276 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerDied","Data":"653ef14818c2af14b35bf5c8eff2142bb2b6b74279ede6a70a0def4afe23f6e5"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.974307 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerStarted","Data":"4daa812f368862258a1d55a15b7d75718ffc99b127c66500a75ea826f368eb02"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.978448 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" event={"ID":"a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7","Type":"ContainerDied","Data":"3afcc46c48464f655df90f98fc4d7ab253c6f64aa91baa5fb68bd13e31da18a6"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.978507 4804 scope.go:117] "RemoveContainer" containerID="4c1c923612c015c747b5107243c527ca1074cc2a7e9bd605f2d99365a036305a" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.978620 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6f8c45789f-bxbzj" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988241 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" event={"ID":"f59b13cd-bec2-4590-a661-0cf416b68290","Type":"ContainerDied","Data":"51ff06357611f23f4eba2b45be00b86265a17d7ddf09ab8ad09dd74930e3724b"} Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988380 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-fcfdd6f9f-d6b7l" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988435 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:23 crc kubenswrapper[4804]: I0128 11:41:23.988480 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.018650 4804 scope.go:117] "RemoveContainer" containerID="927a2ab272600c80f47b45172a165cef75c84adb4271faee004dacfbc0c99580" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.099077 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.116728 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-fcfdd6f9f-d6b7l"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.142063 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.169676 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180205 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: E0128 11:41:24.180603 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180619 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: E0128 11:41:24.180654 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180659 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" 
containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180811 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.180832 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" containerName="init" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.181800 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.189148 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-dv6zq" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.189275 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.189520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.208713 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.220946 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6f8c45789f-bxbzj"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.227827 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.273675 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.276393 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.289936 4804 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.291472 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293139 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293164 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293194 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293214 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293301 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293325 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293340 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.293483 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.294295 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.395803 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396289 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396372 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396410 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.396693 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397476 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397569 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" 
(UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397625 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397652 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397761 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397807 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397826 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397905 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.397936 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.398396 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.398790 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.398849 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.401874 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.406600 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.410562 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.420870 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.457191 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") " 
pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500332 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500415 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500502 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500530 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500573 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 
11:41:24.500599 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.500657 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.501180 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.503167 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.503248 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.505325 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.513042 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.517417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.517759 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.518714 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.533019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.702118 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.929536 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77990e19-1287-4a52-a755-927c3fc6f529" path="/var/lib/kubelet/pods/77990e19-1287-4a52-a755-927c3fc6f529/volumes" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.930330 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="944195d2-3d17-4cc5-84b5-eb204312c37e" path="/var/lib/kubelet/pods/944195d2-3d17-4cc5-84b5-eb204312c37e/volumes" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.930762 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7" path="/var/lib/kubelet/pods/a7fcfdff-464d-4f4a-b6f6-d5f864fb47e7/volumes" Jan 28 11:41:24 crc kubenswrapper[4804]: I0128 11:41:24.931479 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f59b13cd-bec2-4590-a661-0cf416b68290" 
path="/var/lib/kubelet/pods/f59b13cd-bec2-4590-a661-0cf416b68290/volumes" Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.058191 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerStarted","Data":"793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a"} Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.058381 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.092610 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" podStartSLOduration=4.092579383 podStartE2EDuration="4.092579383s" podCreationTimestamp="2026-01-28 11:41:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:25.079948631 +0000 UTC m=+1160.874828615" watchObservedRunningTime="2026-01-28 11:41:25.092579383 +0000 UTC m=+1160.887459367" Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.141423 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:25 crc kubenswrapper[4804]: W0128 11:41:25.155504 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod395d12eb_6bd8_4dc2_a026_d37da116fa0d.slice/crio-df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0 WatchSource:0}: Error finding container df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0: Status 404 returned error can't find the container with id df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0 Jan 28 11:41:25 crc kubenswrapper[4804]: I0128 11:41:25.303676 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:25 crc kubenswrapper[4804]: W0128 11:41:25.313396 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddf00e734_18a4_4614_b272_1d914b5e39ce.slice/crio-5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac WatchSource:0}: Error finding container 5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac: Status 404 returned error can't find the container with id 5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac Jan 28 11:41:26 crc kubenswrapper[4804]: I0128 11:41:26.133532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerStarted","Data":"2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22"} Jan 28 11:41:26 crc kubenswrapper[4804]: I0128 11:41:26.133897 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerStarted","Data":"df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0"} Jan 28 11:41:26 crc kubenswrapper[4804]: I0128 11:41:26.137140 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerStarted","Data":"5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.150654 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerStarted","Data":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.154693 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="6dc73391-67e1-4f78-9531-509bcf54be36" containerID="e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d" exitCode=0 Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.154783 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerDied","Data":"e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.158359 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerStarted","Data":"cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f"} Jan 28 11:41:27 crc kubenswrapper[4804]: I0128 11:41:27.195376 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=3.19535915 podStartE2EDuration="3.19535915s" podCreationTimestamp="2026-01-28 11:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:27.190172385 +0000 UTC m=+1162.985052369" watchObservedRunningTime="2026-01-28 11:41:27.19535915 +0000 UTC m=+1162.990239134" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.321318 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gczh7" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496519 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496575 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496708 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496757 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496805 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.496860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") pod \"6dc73391-67e1-4f78-9531-509bcf54be36\" (UID: \"6dc73391-67e1-4f78-9531-509bcf54be36\") " Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.501598 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.502102 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn" (OuterVolumeSpecName: "kube-api-access-cwcqn") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "kube-api-access-cwcqn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.503073 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts" (OuterVolumeSpecName: "scripts") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.504054 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.525416 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.527568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data" (OuterVolumeSpecName: "config-data") pod "6dc73391-67e1-4f78-9531-509bcf54be36" (UID: "6dc73391-67e1-4f78-9531-509bcf54be36"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599421 4804 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599476 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599492 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599504 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc 
kubenswrapper[4804]: I0128 11:41:29.599516 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cwcqn\" (UniqueName: \"kubernetes.io/projected/6dc73391-67e1-4f78-9531-509bcf54be36-kube-api-access-cwcqn\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:29 crc kubenswrapper[4804]: I0128 11:41:29.599528 4804 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/6dc73391-67e1-4f78-9531-509bcf54be36-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.189226 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gczh7" event={"ID":"6dc73391-67e1-4f78-9531-509bcf54be36","Type":"ContainerDied","Data":"2f62a3c4a2cd081b1f832339c14d958fdc8b030abf65c3750cc0feb9582f280e"} Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.189272 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f62a3c4a2cd081b1f832339c14d958fdc8b030abf65c3750cc0feb9582f280e" Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.189309 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gczh7"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.395440 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gczh7"]
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.402254 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gczh7"]
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.496214 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-qmm7h"]
Jan 28 11:41:30 crc kubenswrapper[4804]: E0128 11:41:30.496669 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" containerName="keystone-bootstrap"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.496689 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" containerName="keystone-bootstrap"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.497012 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" containerName="keystone-bootstrap"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.497715 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500019 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500045 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500345 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500362 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.500492 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.506944 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"]
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.615989 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616088 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616149 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616178 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616217 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.616242 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.650325 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.650587 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log" containerID="cri-o://2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22" gracePeriod=30
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.650675 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd" containerID="cri-o://cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f" gracePeriod=30
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718216 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718570 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718603 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718624 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.718681 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.723973 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.737653 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.737699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.740377 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.740843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.743239 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"keystone-bootstrap-qmm7h\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.754858 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.818233 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h"
Jan 28 11:41:30 crc kubenswrapper[4804]: I0128 11:41:30.927289 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6dc73391-67e1-4f78-9531-509bcf54be36" path="/var/lib/kubelet/pods/6dc73391-67e1-4f78-9531-509bcf54be36/volumes"
Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207439 4804 generic.go:334] "Generic (PLEG): container finished" podID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerID="cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f" exitCode=0
Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207468 4804 generic.go:334] "Generic (PLEG): container finished" podID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerID="2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22" exitCode=143
Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207491 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerDied","Data":"cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f"}
Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.207521 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerDied","Data":"2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22"}
Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.883210 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.967760 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"]
Jan 28 11:41:31 crc kubenswrapper[4804]: I0128 11:41:31.968057 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" containerID="cri-o://300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4" gracePeriod=10
Jan 28 11:41:32 crc kubenswrapper[4804]: I0128 11:41:32.096359 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: connect: connection refused"
Jan 28 11:41:32 crc kubenswrapper[4804]: I0128 11:41:32.222599 4804 generic.go:334] "Generic (PLEG): container finished" podID="46956e08-e267-4021-bf42-69a3e35826e0" containerID="300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4" exitCode=0
Jan 28 11:41:32 crc kubenswrapper[4804]: I0128 11:41:32.222736 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerDied","Data":"300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4"}
Jan 28 11:41:41 crc kubenswrapper[4804]: I0128 11:41:41.305547 4804 generic.go:334] "Generic (PLEG): container finished" podID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerID="39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75" exitCode=0
Jan 28 11:41:41 crc kubenswrapper[4804]: I0128 11:41:41.305690 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerDied","Data":"39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75"}
Jan 28 11:41:42 crc kubenswrapper[4804]: I0128 11:41:42.097646 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 28 11:41:47 crc kubenswrapper[4804]: I0128 11:41:47.108842 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout"
Jan 28 11:41:47 crc kubenswrapper[4804]: I0128 11:41:47.110315 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.827867 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.833209 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.838057 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z"
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986591 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986653 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986673 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986704 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.986755 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987394 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs" (OuterVolumeSpecName: "logs") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987602 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987639 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") pod \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987679 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987713 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987785 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987808 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987830 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987853 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") pod \"46956e08-e267-4021-bf42-69a3e35826e0\" (UID: \"46956e08-e267-4021-bf42-69a3e35826e0\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987894 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") pod \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\" (UID: \"395d12eb-6bd8-4dc2-a026-d37da116fa0d\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987915 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") pod \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.987939 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") pod \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\" (UID: \"e541b2a6-870f-4829-bdfc-ad3e4368ec0b\") "
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.988348 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:48 crc kubenswrapper[4804]: I0128 11:41:48.990098 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.014582 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm" (OuterVolumeSpecName: "kube-api-access-4t5gm") pod "e541b2a6-870f-4829-bdfc-ad3e4368ec0b" (UID: "e541b2a6-870f-4829-bdfc-ad3e4368ec0b"). InnerVolumeSpecName "kube-api-access-4t5gm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.017528 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c" (OuterVolumeSpecName: "kube-api-access-hjp6c") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "kube-api-access-hjp6c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.027690 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts" (OuterVolumeSpecName: "scripts") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.027905 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.029573 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4" (OuterVolumeSpecName: "kube-api-access-5pdb4") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "kube-api-access-5pdb4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.045773 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config" (OuterVolumeSpecName: "config") pod "e541b2a6-870f-4829-bdfc-ad3e4368ec0b" (UID: "e541b2a6-870f-4829-bdfc-ad3e4368ec0b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.049278 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e541b2a6-870f-4829-bdfc-ad3e4368ec0b" (UID: "e541b2a6-870f-4829-bdfc-ad3e4368ec0b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.071320 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data" (OuterVolumeSpecName: "config-data") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.072584 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.073959 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.074484 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "395d12eb-6bd8-4dc2-a026-d37da116fa0d" (UID: "395d12eb-6bd8-4dc2-a026-d37da116fa0d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.084637 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config" (OuterVolumeSpecName: "config") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090417 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090934 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090983 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.090996 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091007 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5pdb4\" (UniqueName: \"kubernetes.io/projected/395d12eb-6bd8-4dc2-a026-d37da116fa0d-kube-api-access-5pdb4\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091017 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/395d12eb-6bd8-4dc2-a026-d37da116fa0d-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091026 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjp6c\" (UniqueName: \"kubernetes.io/projected/46956e08-e267-4021-bf42-69a3e35826e0-kube-api-access-hjp6c\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091035 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091043 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091052 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4t5gm\" (UniqueName: \"kubernetes.io/projected/e541b2a6-870f-4829-bdfc-ad3e4368ec0b-kube-api-access-4t5gm\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091060 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091101 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" "
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "46956e08-e267-4021-bf42-69a3e35826e0" (UID: "46956e08-e267-4021-bf42-69a3e35826e0"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091112 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091169 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/395d12eb-6bd8-4dc2-a026-d37da116fa0d-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.091195 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.111074 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.193653 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.193730 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/46956e08-e267-4021-bf42-69a3e35826e0-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.382054 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-b679z" event={"ID":"e541b2a6-870f-4829-bdfc-ad3e4368ec0b","Type":"ContainerDied","Data":"60b8da9908ec3982ed55579ec364c45546383ec243c42fe055e29512244fd6d9"}
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.382093 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="60b8da9908ec3982ed55579ec364c45546383ec243c42fe055e29512244fd6d9"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.382148 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-b679z"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.387201 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" event={"ID":"46956e08-e267-4021-bf42-69a3e35826e0","Type":"ContainerDied","Data":"77e9032d4b1d0896ab98b1033b917f2c0d9b702e320f4756d982cdbd575cb2f8"}
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.387293 4804 scope.go:117] "RemoveContainer" containerID="300764162d368ffd0de5fe83665369a6aa0f7d774b37e1517a5bc0a601f70ad4"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.387672 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.394412 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"395d12eb-6bd8-4dc2-a026-d37da116fa0d","Type":"ContainerDied","Data":"df977f466b34eaa34d558411f065a3a70d1d671f58ce1242209b0b7a7377cdf0"}
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.394494 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.455012 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.473298 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.489066 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"]
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.503043 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d5b6d6b67-b7zpn"]
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.514667 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515108 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515126 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log"
Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515152 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515159 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd"
Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515174 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerName="neutron-db-sync"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515181 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerName="neutron-db-sync"
Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515195 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="init"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515202 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="init"
Jan 28 11:41:49 crc kubenswrapper[4804]: E0128 11:41:49.515212 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515218 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515385 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-httpd"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515397 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" containerName="glance-log"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515411 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" containerName="neutron-db-sync"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.515426 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.516656 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.520832 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.521194 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.530635 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.602521 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.602952 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603144 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0"
Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603269 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603451 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.603588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.605203 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.605370 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707470 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707707 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707796 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.707869 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708060 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9qss5\" (UniqueName: 
\"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708222 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708264 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708284 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708581 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.708897 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod 
\"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.712707 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.712996 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.713505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.716790 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.729692 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " 
pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.740749 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " pod="openstack/glance-default-external-api-0" Jan 28 11:41:49 crc kubenswrapper[4804]: I0128 11:41:49.842373 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.143506 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.152381 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.184853 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232069 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232124 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232579 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.232865 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.233152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.233286 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc 
kubenswrapper[4804]: I0128 11:41:50.337729 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337767 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.337849 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.339043 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.339556 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.340098 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.340605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.346351 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.367874 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r99bj\" (UniqueName: 
\"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"dnsmasq-dns-5ccc5c4795-cthxz\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.405965 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.407923 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411409 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411442 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411593 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-pl59s" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.411638 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.437816 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.479914 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541652 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541701 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541763 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541784 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.541863 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"neutron-5c6795cf88-vn4sv\" (UID: 
\"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.643829 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.643945 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.644052 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.644103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.644127 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 
crc kubenswrapper[4804]: I0128 11:41:50.648856 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.650653 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.651086 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.652525 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.660386 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"neutron-5c6795cf88-vn4sv\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") " pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.729712 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.928586 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="395d12eb-6bd8-4dc2-a026-d37da116fa0d" path="/var/lib/kubelet/pods/395d12eb-6bd8-4dc2-a026-d37da116fa0d/volumes" Jan 28 11:41:50 crc kubenswrapper[4804]: I0128 11:41:50.929407 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46956e08-e267-4021-bf42-69a3e35826e0" path="/var/lib/kubelet/pods/46956e08-e267-4021-bf42-69a3e35826e0/volumes" Jan 28 11:41:52 crc kubenswrapper[4804]: I0128 11:41:52.110399 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-6d5b6d6b67-b7zpn" podUID="46956e08-e267-4021-bf42-69a3e35826e0" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.128:5353: i/o timeout" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.247892 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.249492 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.253347 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.253984 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.266670 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.310041 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.310278 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311054 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311146 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311347 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311386 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.311468 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420778 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420817 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m86bw\" (UniqueName: 
\"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420929 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420959 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.420989 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.421014 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod 
\"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.427843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.427955 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.428118 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.428292 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.429087 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc 
kubenswrapper[4804]: I0128 11:41:53.430146 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.445178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"neutron-6b8bbc97bf-dkp56\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") " pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:53 crc kubenswrapper[4804]: I0128 11:41:53.575968 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:54 crc kubenswrapper[4804]: I0128 11:41:54.513273 4804 scope.go:117] "RemoveContainer" containerID="0e30a6113bcc313e3cf69e2a658168ba99f0082887992b529bd0b556c9a4b494" Jan 28 11:41:54 crc kubenswrapper[4804]: E0128 11:41:54.540369 4804 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Jan 28 11:41:54 crc kubenswrapper[4804]: E0128 11:41:54.540547 4804 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_set_configs && 
/usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5v9bm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin
:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-2swjk_openstack(3bd4fedc-8940-48ad-b718-4fbb98e48bf0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 28 11:41:54 crc kubenswrapper[4804]: E0128 11:41:54.541941 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-2swjk" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" Jan 28 11:41:54 crc kubenswrapper[4804]: I0128 11:41:54.820326 4804 scope.go:117] "RemoveContainer" containerID="cc071da784f4e94aefc65159f0222cdbc3463f7289d98e8eb421054b7ca1199f" Jan 28 11:41:54 crc kubenswrapper[4804]: I0128 11:41:54.890229 4804 scope.go:117] "RemoveContainer" containerID="2ebff81ae6eb2c2b74f0fe5476b4e0026ec75898d6d7e782d262a6913f2daa22" Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.006715 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"] Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.027966 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8686dbae_d7dd_4662_81a8_ab51cc85a115.slice/crio-955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337 WatchSource:0}: Error finding container 955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337: Status 404 returned error can't find the container with id 955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337 Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.179586 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:41:55 crc 
kubenswrapper[4804]: W0128 11:41:55.189591 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1c0b69d_65ba_4cfd_b7d5_b842e64eafb4.slice/crio-712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077 WatchSource:0}: Error finding container 712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077: Status 404 returned error can't find the container with id 712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077 Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.263669 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc6e31fe0_ad05_40cd_9eee_1597a421a009.slice/crio-77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa WatchSource:0}: Error finding container 77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa: Status 404 returned error can't find the container with id 77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.267075 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.446366 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerStarted","Data":"712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.453688 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerStarted","Data":"141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.458701 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerStarted","Data":"905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.458746 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerStarted","Data":"955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.472499 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.476730 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerStarted","Data":"77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.491372 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerStarted","Data":"c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396"} Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.494103 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-9brzz" podStartSLOduration=2.975824411 podStartE2EDuration="35.494073217s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.962458298 +0000 UTC m=+1157.757338272" lastFinishedPulling="2026-01-28 11:41:54.480707094 +0000 UTC m=+1190.275587078" observedRunningTime="2026-01-28 11:41:55.487837988 +0000 UTC m=+1191.282717972" 
watchObservedRunningTime="2026-01-28 11:41:55.494073217 +0000 UTC m=+1191.288953201" Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.494354 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.496929 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1f4e070e_7b0f_4a60_9383_7e1a61380fc6.slice/crio-17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b WatchSource:0}: Error finding container 17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b: Status 404 returned error can't find the container with id 17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.519552 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-qmm7h" podStartSLOduration=25.519526287 podStartE2EDuration="25.519526287s" podCreationTimestamp="2026-01-28 11:41:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:55.507165073 +0000 UTC m=+1191.302045057" watchObservedRunningTime="2026-01-28 11:41:55.519526287 +0000 UTC m=+1191.314406271" Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.520636 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" containerID="cri-o://97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" gracePeriod=30 Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.520765 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" 
containerID="cri-o://633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" gracePeriod=30 Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.520781 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerStarted","Data":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} Jan 28 11:41:55 crc kubenswrapper[4804]: E0128 11:41:55.530618 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-2swjk" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.531122 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wch49" podStartSLOduration=2.7892619119999997 podStartE2EDuration="35.531106625s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.780363982 +0000 UTC m=+1157.575243966" lastFinishedPulling="2026-01-28 11:41:54.522208695 +0000 UTC m=+1190.317088679" observedRunningTime="2026-01-28 11:41:55.530590699 +0000 UTC m=+1191.325470683" watchObservedRunningTime="2026-01-28 11:41:55.531106625 +0000 UTC m=+1191.325986609" Jan 28 11:41:55 crc kubenswrapper[4804]: W0128 11:41:55.594374 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod17438a34_7ac2_4451_b74e_97ebbf9318f3.slice/crio-13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58 WatchSource:0}: Error finding container 13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58: Status 404 returned error can't find the container with id 13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58 Jan 28 11:41:55 crc 
kubenswrapper[4804]: I0128 11:41:55.605022 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:41:55 crc kubenswrapper[4804]: I0128 11:41:55.617255 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=31.617230417000002 podStartE2EDuration="31.617230417s" podCreationTimestamp="2026-01-28 11:41:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:55.569472467 +0000 UTC m=+1191.364352451" watchObservedRunningTime="2026-01-28 11:41:55.617230417 +0000 UTC m=+1191.412110401" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.245944 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403149 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403266 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403344 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 
11:41:56.403457 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403516 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403534 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403562 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") pod \"df00e734-18a4-4614-b272-1d914b5e39ce\" (UID: \"df00e734-18a4-4614-b272-1d914b5e39ce\") " Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.403988 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.404041 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs" (OuterVolumeSpecName: "logs") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.409447 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w" (OuterVolumeSpecName: "kube-api-access-l645w") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "kube-api-access-l645w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.409530 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.415483 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts" (OuterVolumeSpecName: "scripts") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.428549 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.464797 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data" (OuterVolumeSpecName: "config-data") pod "df00e734-18a4-4614-b272-1d914b5e39ce" (UID: "df00e734-18a4-4614-b272-1d914b5e39ce"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505408 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505442 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505453 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505483 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 11:41:56 crc 
kubenswrapper[4804]: I0128 11:41:56.505495 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/df00e734-18a4-4614-b272-1d914b5e39ce-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505503 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/df00e734-18a4-4614-b272-1d914b5e39ce-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.505512 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l645w\" (UniqueName: \"kubernetes.io/projected/df00e734-18a4-4614-b272-1d914b5e39ce-kube-api-access-l645w\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.527965 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerStarted","Data":"a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerStarted","Data":"bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537177 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerStarted","Data":"17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.537956 4804 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-6b8bbc97bf-dkp56" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547459 4804 generic.go:334] "Generic (PLEG): container finished" podID="df00e734-18a4-4614-b272-1d914b5e39ce" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" exitCode=143 Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547508 4804 generic.go:334] "Generic (PLEG): container finished" podID="df00e734-18a4-4614-b272-1d914b5e39ce" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" exitCode=143 Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547575 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerDied","Data":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547622 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerDied","Data":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"df00e734-18a4-4614-b272-1d914b5e39ce","Type":"ContainerDied","Data":"5fd49444fe3ee038e54635fea87acd84a00e42d07f56ba45b5c0e1dc565c8aac"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547669 4804 scope.go:117] "RemoveContainer" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.547898 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.584034 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-6b8bbc97bf-dkp56" podStartSLOduration=3.584014417 podStartE2EDuration="3.584014417s" podCreationTimestamp="2026-01-28 11:41:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:56.576182028 +0000 UTC m=+1192.371062012" watchObservedRunningTime="2026-01-28 11:41:56.584014417 +0000 UTC m=+1192.378894401" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.587380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerStarted","Data":"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.587426 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerStarted","Data":"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.587437 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerStarted","Data":"13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.588393 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.589907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerStarted","Data":"da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.595924 4804 generic.go:334] "Generic (PLEG): container finished" podID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" exitCode=0 Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.597259 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerDied","Data":"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e"} Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.610412 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.631833 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5c6795cf88-vn4sv" podStartSLOduration=6.631816728 podStartE2EDuration="6.631816728s" podCreationTimestamp="2026-01-28 11:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:56.630498357 +0000 UTC m=+1192.425378341" watchObservedRunningTime="2026-01-28 11:41:56.631816728 +0000 UTC m=+1192.426696712" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.880951 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.888076 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.897196 4804 scope.go:117] "RemoveContainer" 
containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.942090 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" path="/var/lib/kubelet/pods/df00e734-18a4-4614-b272-1d914b5e39ce/volumes" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.954826 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: E0128 11:41:56.970925 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971004 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" Jan 28 11:41:56 crc kubenswrapper[4804]: E0128 11:41:56.971084 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971093 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971595 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-httpd" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.971617 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="df00e734-18a4-4614-b272-1d914b5e39ce" containerName="glance-log" Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.983287 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:56 crc kubenswrapper[4804]: I0128 11:41:56.983411 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.027701 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.028572 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.054157 4804 scope.go:117] "RemoveContainer" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:57 crc kubenswrapper[4804]: E0128 11:41:57.065121 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": container with ID starting with 633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8 not found: ID does not exist" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.065176 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} err="failed to get container status \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": rpc error: code = NotFound desc = could not find container \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": container with ID starting with 633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.065214 4804 scope.go:117] "RemoveContainer" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:57 crc kubenswrapper[4804]: E0128 11:41:57.086072 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": container with ID starting with 97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7 not found: ID does not exist" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.086141 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} err="failed to get container status \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": rpc error: code = NotFound desc = could not find container \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": container with ID starting with 97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.086176 4804 scope.go:117] "RemoveContainer" containerID="633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.097118 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8"} err="failed to get container status \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": rpc error: code = NotFound desc = could not find container \"633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8\": container with ID starting with 633c7871b84cdc683a23912396dd323b373e1d086e0e651fc79fab3a38de81b8 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.097179 4804 scope.go:117] "RemoveContainer" containerID="97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.107235 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7"} err="failed to get container status \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": rpc error: code = NotFound desc = could not find container \"97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7\": container with ID starting with 97c2b6de8f0bfa8cd9066394a7db295492478196f61c2cd2819e50e1cabe52f7 not found: ID does not exist" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.240337 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.240764 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241017 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241275 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241380 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241439 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241465 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.241620 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.343519 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"glance-default-internal-api-0\" 
(UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.343781 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.343903 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344013 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344110 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344197 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " 
pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344286 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.344319 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.345056 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.345750 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.345759 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc 
kubenswrapper[4804]: I0128 11:41:57.350176 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.350426 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.350709 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.351575 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.363725 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.391386 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.423246 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:41:57 crc kubenswrapper[4804]: I0128 11:41:57.964801 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.634530 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerStarted","Data":"eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.639377 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerStarted","Data":"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.639529 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.645449 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerStarted","Data":"08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.645516 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerStarted","Data":"69238578a45f6424f2874038dcb7535af5f39f1f664e37959ae69aa2b648befa"} Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.660620 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=9.660596241 podStartE2EDuration="9.660596241s" podCreationTimestamp="2026-01-28 11:41:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:58.657237364 +0000 UTC m=+1194.452117368" watchObservedRunningTime="2026-01-28 11:41:58.660596241 +0000 UTC m=+1194.455476225" Jan 28 11:41:58 crc kubenswrapper[4804]: I0128 11:41:58.688086 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" podStartSLOduration=8.688065155 podStartE2EDuration="8.688065155s" podCreationTimestamp="2026-01-28 11:41:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:58.687804996 +0000 UTC m=+1194.482684980" watchObservedRunningTime="2026-01-28 11:41:58.688065155 +0000 UTC m=+1194.482945139" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.660488 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7"} Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.663692 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerStarted","Data":"4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa"} Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.671183 4804 generic.go:334] 
"Generic (PLEG): container finished" podID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerID="905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9" exitCode=0 Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.671290 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerDied","Data":"905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9"} Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.700459 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.700424026 podStartE2EDuration="3.700424026s" podCreationTimestamp="2026-01-28 11:41:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:41:59.686797462 +0000 UTC m=+1195.481677446" watchObservedRunningTime="2026-01-28 11:41:59.700424026 +0000 UTC m=+1195.495304010" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.843056 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.843187 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.875358 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:41:59 crc kubenswrapper[4804]: I0128 11:41:59.887351 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.682930 4804 generic.go:334] "Generic (PLEG): container finished" podID="6b292a47-f331-472d-941e-193e41fee49f" 
containerID="c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396" exitCode=0 Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.682936 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerDied","Data":"c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396"} Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.683339 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:42:00 crc kubenswrapper[4804]: I0128 11:42:00.683641 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.069185 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.132282 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133099 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133290 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " 
Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133331 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133374 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.133665 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") pod \"8686dbae-d7dd-4662-81a8-ab51cc85a115\" (UID: \"8686dbae-d7dd-4662-81a8-ab51cc85a115\") " Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.139860 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.140156 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.140397 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts" (OuterVolumeSpecName: "scripts") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.141795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv" (OuterVolumeSpecName: "kube-api-access-m4hmv") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "kube-api-access-m4hmv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.167492 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data" (OuterVolumeSpecName: "config-data") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.184377 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8686dbae-d7dd-4662-81a8-ab51cc85a115" (UID: "8686dbae-d7dd-4662-81a8-ab51cc85a115"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236893 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236932 4804 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236942 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4hmv\" (UniqueName: \"kubernetes.io/projected/8686dbae-d7dd-4662-81a8-ab51cc85a115-kube-api-access-m4hmv\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236953 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236963 4804 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.236974 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8686dbae-d7dd-4662-81a8-ab51cc85a115-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.693591 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-qmm7h" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.694310 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-qmm7h" event={"ID":"8686dbae-d7dd-4662-81a8-ab51cc85a115","Type":"ContainerDied","Data":"955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337"} Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.694333 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="955b230c95ad08f43c3097b81f46147f1a68a0186fc7e64fd4a923911e4cf337" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.912416 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:42:01 crc kubenswrapper[4804]: E0128 11:42:01.913202 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerName="keystone-bootstrap" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.913224 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerName="keystone-bootstrap" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.913467 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" containerName="keystone-bootstrap" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.914127 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918350 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-xcgbx" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918525 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918767 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.918963 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.919113 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.919229 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Jan 28 11:42:01 crc kubenswrapper[4804]: I0128 11:42:01.921273 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053179 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053311 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") 
" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053378 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053405 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053455 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053497 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"keystone-6f885d959c-vhjh4\" (UID: 
\"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.053555 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155170 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155242 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155291 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155378 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 
11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155457 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.155488 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.156557 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.156650 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.161122 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.161931 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.162754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.163815 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.165394 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.169519 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.175933 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx5wp\" (UniqueName: 
\"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.176336 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"keystone-6f885d959c-vhjh4\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: E0128 11:42:02.195385 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod72b9a8c6_1dc2_4083_9cbe_0564721ef7bf.slice/crio-141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.252868 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.709012 4804 generic.go:334] "Generic (PLEG): container finished" podID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerID="141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c" exitCode=0 Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.709333 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerDied","Data":"141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c"} Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.841603 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wch49" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974331 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974388 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974470 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974664 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.974748 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") pod \"6b292a47-f331-472d-941e-193e41fee49f\" (UID: \"6b292a47-f331-472d-941e-193e41fee49f\") " Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.975651 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs" (OuterVolumeSpecName: "logs") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:02 crc kubenswrapper[4804]: I0128 11:42:02.980262 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67" (OuterVolumeSpecName: "kube-api-access-cjk67") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "kube-api-access-cjk67". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.001756 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts" (OuterVolumeSpecName: "scripts") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.003571 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.005488 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data" (OuterVolumeSpecName: "config-data") pod "6b292a47-f331-472d-941e-193e41fee49f" (UID: "6b292a47-f331-472d-941e-193e41fee49f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077407 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077438 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077448 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjk67\" (UniqueName: \"kubernetes.io/projected/6b292a47-f331-472d-941e-193e41fee49f-kube-api-access-cjk67\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077457 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b292a47-f331-472d-941e-193e41fee49f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.077466 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6b292a47-f331-472d-941e-193e41fee49f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.744326 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wch49" Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.748575 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wch49" event={"ID":"6b292a47-f331-472d-941e-193e41fee49f","Type":"ContainerDied","Data":"4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7"} Jan 28 11:42:03 crc kubenswrapper[4804]: I0128 11:42:03.748629 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4fc600c8f61a9b8ec6d1ffdf93634fa090aed774c9c2a83b4350fad5ad1161a7" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.161656 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:42:04 crc kubenswrapper[4804]: E0128 11:42:04.175534 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b292a47-f331-472d-941e-193e41fee49f" containerName="placement-db-sync" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.175570 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b292a47-f331-472d-941e-193e41fee49f" containerName="placement-db-sync" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.175860 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b292a47-f331-472d-941e-193e41fee49f" containerName="placement-db-sync" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.178324 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.181195 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.181454 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.181782 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.182810 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-682gl" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.183759 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198586 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198672 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198730 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198781 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198815 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198932 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.198953 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.204915 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 
11:42:04.277540 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.294159 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.311919 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312057 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312087 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312189 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312231 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ffjll\" 
(UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312283 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.312317 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.314008 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.323768 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.324143 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod 
\"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.324625 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.329587 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.339615 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.348246 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"placement-659f7cffd6-wm9cj\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:04 crc kubenswrapper[4804]: I0128 11:42:04.516456 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.096596 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9brzz" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.125487 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") pod \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.125532 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") pod \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.125637 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") pod \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\" (UID: \"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf\") " Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.130436 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7" (OuterVolumeSpecName: "kube-api-access-5mth7") pod "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" (UID: "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf"). InnerVolumeSpecName "kube-api-access-5mth7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.140063 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" (UID: "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.180065 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" (UID: "72b9a8c6-1dc2-4083-9cbe-0564721ef7bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.228540 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5mth7\" (UniqueName: \"kubernetes.io/projected/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-kube-api-access-5mth7\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.228683 4804 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.228701 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.483123 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.554266 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"] Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.554698 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" 
containerID="cri-o://793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a" gracePeriod=10 Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.632530 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:42:05 crc kubenswrapper[4804]: W0128 11:42:05.635086 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4efe85dc_b64c_4cbe_83f7_89fa462a95a0.slice/crio-7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744 WatchSource:0}: Error finding container 7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744: Status 404 returned error can't find the container with id 7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744 Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.688146 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.771251 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-9brzz" event={"ID":"72b9a8c6-1dc2-4083-9cbe-0564721ef7bf","Type":"ContainerDied","Data":"17e0e19fde7a47cbcc9cf6fab97dc7b7cdb474a5ae0195fdbdcd149f07b46b07"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.771286 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="17e0e19fde7a47cbcc9cf6fab97dc7b7cdb474a5ae0195fdbdcd149f07b46b07" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.771336 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-9brzz" Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.782031 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerStarted","Data":"326e140f9daa666bf3c0b563922935205ab7fc5dba38cc45fd96d0a13dcbd798"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.786213 4804 generic.go:334] "Generic (PLEG): container finished" podID="91b4be5e-0f8c-495e-869d-38a047276f33" containerID="793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a" exitCode=0 Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.786301 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerDied","Data":"793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.788240 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerStarted","Data":"7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744"} Jan 28 11:42:05 crc kubenswrapper[4804]: I0128 11:42:05.816915 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111"} Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.038162 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052171 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052282 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052375 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052417 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052462 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.052518 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") pod \"91b4be5e-0f8c-495e-869d-38a047276f33\" (UID: \"91b4be5e-0f8c-495e-869d-38a047276f33\") " Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.069448 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b" (OuterVolumeSpecName: "kube-api-access-8k47b") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "kube-api-access-8k47b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.125141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.154942 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.155394 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8k47b\" (UniqueName: \"kubernetes.io/projected/91b4be5e-0f8c-495e-869d-38a047276f33-kube-api-access-8k47b\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.166223 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.168621 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.178794 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config" (OuterVolumeSpecName: "config") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.195832 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "91b4be5e-0f8c-495e-869d-38a047276f33" (UID: "91b4be5e-0f8c-495e-869d-38a047276f33"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259461 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259513 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259528 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.259539 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/91b4be5e-0f8c-495e-869d-38a047276f33-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.342578 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:42:06 crc kubenswrapper[4804]: E0128 11:42:06.343052 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerName="barbican-db-sync" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343072 4804 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerName="barbican-db-sync" Jan 28 11:42:06 crc kubenswrapper[4804]: E0128 11:42:06.343111 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="init" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343119 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="init" Jan 28 11:42:06 crc kubenswrapper[4804]: E0128 11:42:06.343131 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343139 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343359 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" containerName="dnsmasq-dns" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.343394 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" containerName="barbican-db-sync" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.344684 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.351183 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.352871 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.354840 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.355095 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-rvw8m" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.355223 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.356161 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.362347 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364438 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364483 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364511 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364562 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.364597 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.377579 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466393 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466433 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") 
pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466474 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466506 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466530 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466573 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466592 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466622 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466657 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.466689 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.468414 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.481033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.488703 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.497378 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.498613 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"barbican-keystone-listener-5f7496d4bd-26fnt\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.545666 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.547642 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.569924 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"] Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570042 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570088 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570111 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570177 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570201 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570221 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570308 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570331 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.570355 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.572362 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.580733 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.591545 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.599302 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.628038 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") pod \"barbican-worker-8f675b957-rm9qp\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.628786 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.662657 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672398 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672480 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672524 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672588 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672623 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.672680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.673836 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.682643 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod 
\"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.683138 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.683798 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.684515 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.717642 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"dnsmasq-dns-688c87cc99-m7bk5\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") " pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.725298 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"]
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.727301 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.736030 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783101 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783180 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783380 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.783499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.791130 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"]
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885035 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885126 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885169 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885191 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.885247 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.886512 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.890585 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.896568 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.897286 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.913753 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerStarted","Data":"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"}
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.933560 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"barbican-api-58c46c5cc8-bpsgv\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") " pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.937627 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6f885d959c-vhjh4"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.955474 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerStarted","Data":"2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa"}
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.955524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerStarted","Data":"54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896"}
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.956454 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659f7cffd6-wm9cj"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.956493 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-659f7cffd6-wm9cj"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.968626 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r" event={"ID":"91b4be5e-0f8c-495e-869d-38a047276f33","Type":"ContainerDied","Data":"4daa812f368862258a1d55a15b7d75718ffc99b127c66500a75ea826f368eb02"}
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.978138 4804 scope.go:117] "RemoveContainer" containerID="793e56501f603507de08462d6163102c6e75fc7f6d8874ef3f2a6c93cde5476a"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.976572 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57c957c4ff-85r5r"
Jan 28 11:42:06 crc kubenswrapper[4804]: I0128 11:42:06.995122 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6f885d959c-vhjh4" podStartSLOduration=5.99510069 podStartE2EDuration="5.99510069s" podCreationTimestamp="2026-01-28 11:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:06.964369872 +0000 UTC m=+1202.759249856" watchObservedRunningTime="2026-01-28 11:42:06.99510069 +0000 UTC m=+1202.789980674"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.009177 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.039408 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-659f7cffd6-wm9cj" podStartSLOduration=3.039387009 podStartE2EDuration="3.039387009s" podCreationTimestamp="2026-01-28 11:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:07.010474259 +0000 UTC m=+1202.805354243" watchObservedRunningTime="2026-01-28 11:42:07.039387009 +0000 UTC m=+1202.834266993"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.051534 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"]
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.084819 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57c957c4ff-85r5r"]
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.118370 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.201477 4804 scope.go:117] "RemoveContainer" containerID="653ef14818c2af14b35bf5c8eff2142bb2b6b74279ede6a70a0def4afe23f6e5"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.237266 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"]
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.424070 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.433448 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.488595 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.504193 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.544588 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"]
Jan 28 11:42:07 crc kubenswrapper[4804]: W0128 11:42:07.549176 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod878daeff_34bf_4dab_8118_e42c318849bb.slice/crio-a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d WatchSource:0}: Error finding container a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d: Status 404 returned error can't find the container with id a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.830533 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"]
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.921434 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"]
Jan 28 11:42:07 crc kubenswrapper[4804]: W0128 11:42:07.934781 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod96c46652_7506_4118_a507_a5f2b6668c78.slice/crio-372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46 WatchSource:0}: Error finding container 372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46: Status 404 returned error can't find the container with id 372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.988264 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerStarted","Data":"dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821"}
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.989806 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerStarted","Data":"372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46"}
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.993135 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerStarted","Data":"613d25f46f67af98ce70f3f5abf8d934501d6069e147f6af856f94fa63cd3fb2"}
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.994790 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerStarted","Data":"a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d"}
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.997005 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerStarted","Data":"0556907b161f5a19bd7e76c946764eabb51dab90af80f30118fa8d78582a879a"}
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.997472 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:07 crc kubenswrapper[4804]: I0128 11:42:07.997489 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:08 crc kubenswrapper[4804]: I0128 11:42:08.012989 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-2swjk" podStartSLOduration=3.10179661 podStartE2EDuration="48.012968356s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.484412843 +0000 UTC m=+1157.279292827" lastFinishedPulling="2026-01-28 11:42:06.395584589 +0000 UTC m=+1202.190464573" observedRunningTime="2026-01-28 11:42:08.004901799 +0000 UTC m=+1203.799781793" watchObservedRunningTime="2026-01-28 11:42:08.012968356 +0000 UTC m=+1203.807848340"
Jan 28 11:42:08 crc kubenswrapper[4804]: I0128 11:42:08.940344 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="91b4be5e-0f8c-495e-869d-38a047276f33" path="/var/lib/kubelet/pods/91b4be5e-0f8c-495e-869d-38a047276f33/volumes"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.024962 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerStarted","Data":"8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6"}
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.025007 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerStarted","Data":"c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2"}
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.026504 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.026583 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.034415 4804 generic.go:334] "Generic (PLEG): container finished" podID="7da1add4-521f-473c-8694-ccecf71fce93" containerID="e95cc363dac842375743e8314956bb8d9f168054cc0e4b1f83fe0a24457640be" exitCode=0
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.035571 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerDied","Data":"e95cc363dac842375743e8314956bb8d9f168054cc0e4b1f83fe0a24457640be"}
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.058892 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podStartSLOduration=3.058860474 podStartE2EDuration="3.058860474s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:09.051348855 +0000 UTC m=+1204.846228839" watchObservedRunningTime="2026-01-28 11:42:09.058860474 +0000 UTC m=+1204.853740458"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.670254 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"]
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.672435 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.674534 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.675051 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.689702 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"]
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801015 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801066 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801086 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801109 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801180 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801208 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.801282 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.904987 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.905048 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.905200 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.905852 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906116 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906164 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906183 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.906220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.909853 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.911723 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.919699 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.922216 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.922568 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:09 crc kubenswrapper[4804]: I0128 11:42:09.922741 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"barbican-api-7bd5b5bf44-5z4wx\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") " pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.045492 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.061116 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerStarted","Data":"a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765"}
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.061283 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.061299 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.062522 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.086445 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" podStartSLOduration=4.08642703 podStartE2EDuration="4.08642703s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:10.079456188 +0000 UTC m=+1205.874336172" watchObservedRunningTime="2026-01-28 11:42:10.08642703 +0000 UTC m=+1205.881307004"
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.527939 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:10 crc kubenswrapper[4804]: I0128 11:42:10.546637 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:11 crc kubenswrapper[4804]: I0128 11:42:11.068620 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerStarted","Data":"f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8"}
Jan 28 11:42:11 crc kubenswrapper[4804]: I0128 11:42:11.070674 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerStarted","Data":"1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208"}
Jan 28 11:42:11 crc kubenswrapper[4804]: I0128 11:42:11.197378 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"]
Jan 28 11:42:12 crc kubenswrapper[4804]: I0128 11:42:12.084851 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerStarted","Data":"bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b"}
Jan 28 11:42:12 crc kubenswrapper[4804]: I0128 11:42:12.106210 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" podStartSLOduration=2.783773209 podStartE2EDuration="6.106184234s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="2026-01-28 11:42:07.307159392 +0000 UTC m=+1203.102039376" lastFinishedPulling="2026-01-28 11:42:10.629570417 +0000 UTC m=+1206.424450401" observedRunningTime="2026-01-28 11:42:12.099373897 +0000 UTC m=+1207.894253891" watchObservedRunningTime="2026-01-28 11:42:12.106184234 +0000 UTC m=+1207.901064218"
Jan 28 11:42:15 crc kubenswrapper[4804]: I0128 11:42:15.107861 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerStarted","Data":"1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e"}
Jan 28 11:42:15 crc kubenswrapper[4804]: I0128 11:42:15.131850 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-8f675b957-rm9qp" podStartSLOduration=6.140046342 podStartE2EDuration="9.131831074s" podCreationTimestamp="2026-01-28 11:42:06 +0000 UTC" firstStartedPulling="2026-01-28 11:42:07.551357024 +0000 UTC m=+1203.346237008" lastFinishedPulling="2026-01-28 11:42:10.543141756 +0000 UTC m=+1206.338021740" observedRunningTime="2026-01-28 11:42:15.128801357 +0000 UTC m=+1210.923681341" watchObservedRunningTime="2026-01-28 11:42:15.131831074 +0000 UTC m=+1210.926711068"
Jan 28 11:42:15 crc kubenswrapper[4804]: W0128 11:42:15.327812 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbb3c1e4d_637e_4de6_aa37_7daff5298b30.slice/crio-545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627 WatchSource:0}: Error finding container 545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627: Status 404 returned error can't find the container with id 545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627
Jan 28 11:42:16 crc kubenswrapper[4804]: I0128 11:42:16.117073 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerStarted","Data":"545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627"}
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.017254 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.083703 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"]
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.083985 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" containerID="cri-o://619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" gracePeriod=10
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.132848 4804 generic.go:334] "Generic (PLEG): container finished" podID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" containerID="dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821" exitCode=0
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.132985 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerDied","Data":"dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821"}
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.139465 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerStarted","Data":"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79"}
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.139695 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" containerID="cri-o://353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" gracePeriod=30
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140015 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140086 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" containerID="cri-o://26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" gracePeriod=30
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140162 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" containerID="cri-o://cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" gracePeriod=30
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.140205 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" containerID="cri-o://4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" gracePeriod=30
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.156562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerStarted","Data":"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e"}
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.156610 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerStarted","Data":"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389"}
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.157011 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.157165 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.191822 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.600494285 podStartE2EDuration="57.191795828s" podCreationTimestamp="2026-01-28 11:41:20 +0000 UTC" firstStartedPulling="2026-01-28 11:41:21.773526045 +0000 UTC m=+1157.568406029" lastFinishedPulling="2026-01-28 11:42:16.364827598 +0000 UTC m=+1212.159707572" observedRunningTime="2026-01-28 11:42:17.185314982 +0000 UTC m=+1212.980194976" watchObservedRunningTime="2026-01-28 11:42:17.191795828 +0000 UTC m=+1212.986675812"
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.228354 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podStartSLOduration=8.228328521 podStartE2EDuration="8.228328521s" podCreationTimestamp="2026-01-28 11:42:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:17.219210611 +0000 UTC m=+1213.014090595" watchObservedRunningTime="2026-01-28 11:42:17.228328521 +0000 UTC m=+1213.023208505"
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.711109 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz"
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.844377 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") "
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.844997 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") "
Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.845167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") "
Jan 28
11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.845812 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.846217 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.846320 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") pod \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\" (UID: \"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4\") " Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.863103 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj" (OuterVolumeSpecName: "kube-api-access-r99bj") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "kube-api-access-r99bj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.891006 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.899350 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.900254 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config" (OuterVolumeSpecName: "config") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.905484 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.906179 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" (UID: "a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948713 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948763 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r99bj\" (UniqueName: \"kubernetes.io/projected/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-kube-api-access-r99bj\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948774 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948783 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948792 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:17 crc kubenswrapper[4804]: I0128 11:42:17.948802 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173081 4804 generic.go:334] "Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" exitCode=0 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173111 4804 generic.go:334] 
"Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" exitCode=2 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173119 4804 generic.go:334] "Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" exitCode=0 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173178 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173240 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.173273 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175713 4804 generic.go:334] "Generic (PLEG): container finished" podID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" exitCode=0 Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175820 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerDied","Data":"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175942 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5ccc5c4795-cthxz" event={"ID":"a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4","Type":"ContainerDied","Data":"712240faee5573c11cecd4f774dbe7152151ce4aa8c358cabe00675975fd0077"} Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.175971 4804 scope.go:117] "RemoveContainer" containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.205069 4804 scope.go:117] "RemoveContainer" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.215175 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.222687 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5ccc5c4795-cthxz"] Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.228169 4804 scope.go:117] "RemoveContainer" containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" Jan 28 11:42:18 crc kubenswrapper[4804]: E0128 11:42:18.231517 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6\": container with ID starting with 619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6 not found: ID does not exist" 
containerID="619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.231569 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6"} err="failed to get container status \"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6\": rpc error: code = NotFound desc = could not find container \"619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6\": container with ID starting with 619c83cdca705d448e2e5835eac55022fb285e6da2f5f03239f3f079382055a6 not found: ID does not exist" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.231603 4804 scope.go:117] "RemoveContainer" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" Jan 28 11:42:18 crc kubenswrapper[4804]: E0128 11:42:18.233849 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e\": container with ID starting with 988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e not found: ID does not exist" containerID="988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.233954 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e"} err="failed to get container status \"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e\": rpc error: code = NotFound desc = could not find container \"988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e\": container with ID starting with 988bd674e871e03f6b5bd3343c9169f5546e6cb263a24399cbec20a5f0214e6e not found: ID does not exist" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.538910 4804 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659564 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659644 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659666 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659747 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659775 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.659843 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") pod \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\" (UID: \"3bd4fedc-8940-48ad-b718-4fbb98e48bf0\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.660112 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.661478 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.665916 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts" (OuterVolumeSpecName: "scripts") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.665965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm" (OuterVolumeSpecName: "kube-api-access-5v9bm") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "kube-api-access-5v9bm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.685258 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.699857 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.734006 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.743437 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data" (OuterVolumeSpecName: "config-data") pod "3bd4fedc-8940-48ad-b718-4fbb98e48bf0" (UID: "3bd4fedc-8940-48ad-b718-4fbb98e48bf0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766213 4804 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766267 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5v9bm\" (UniqueName: \"kubernetes.io/projected/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-kube-api-access-5v9bm\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766290 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766320 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.766340 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bd4fedc-8940-48ad-b718-4fbb98e48bf0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867273 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867365 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867466 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867540 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867574 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867596 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.867628 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") pod \"559981d5-7d2e-4624-a425-53ff3158840a\" (UID: \"559981d5-7d2e-4624-a425-53ff3158840a\") " Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.868712 4804 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.869256 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.871802 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts" (OuterVolumeSpecName: "scripts") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.872436 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5" (OuterVolumeSpecName: "kube-api-access-44xv5") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "kube-api-access-44xv5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.896265 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.924042 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" path="/var/lib/kubelet/pods/a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4/volumes" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.932546 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.942224 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969384 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969409 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969419 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969436 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-44xv5\" (UniqueName: \"kubernetes.io/projected/559981d5-7d2e-4624-a425-53ff3158840a-kube-api-access-44xv5\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969445 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/559981d5-7d2e-4624-a425-53ff3158840a-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.969454 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.980404 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data" (OuterVolumeSpecName: "config-data") pod "559981d5-7d2e-4624-a425-53ff3158840a" (UID: "559981d5-7d2e-4624-a425-53ff3158840a"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:18 crc kubenswrapper[4804]: I0128 11:42:18.998852 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-58c46c5cc8-bpsgv" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.070842 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/559981d5-7d2e-4624-a425-53ff3158840a-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.198043 4804 generic.go:334] "Generic (PLEG): container finished" podID="559981d5-7d2e-4624-a425-53ff3158840a" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" exitCode=0 Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.198919 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7"} Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.198960 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"559981d5-7d2e-4624-a425-53ff3158840a","Type":"ContainerDied","Data":"fa6e1a12eec8f670dacaf476eeccb44cad0c7ce79723abf8463004426598a522"} Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.199006 4804 scope.go:117] "RemoveContainer" containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.199395 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.202227 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-2swjk" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.202286 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-2swjk" event={"ID":"3bd4fedc-8940-48ad-b718-4fbb98e48bf0","Type":"ContainerDied","Data":"cd5a1fb1b75f267a6c5725321d259dcf2acd5836e7aa0491855baf75e38ef9de"} Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.202320 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5a1fb1b75f267a6c5725321d259dcf2acd5836e7aa0491855baf75e38ef9de" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.223147 4804 scope.go:117] "RemoveContainer" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.267452 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.283549 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296511 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296897 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296913 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296931 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" containerName="cinder-db-sync" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296938 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" 
containerName="cinder-db-sync" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296951 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296956 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296967 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296972 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.296989 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.296994 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.297012 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="init" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297018 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="init" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.297037 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297044 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" Jan 28 11:42:19 crc 
kubenswrapper[4804]: I0128 11:42:19.297201 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="sg-core" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297211 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1c0b69d-65ba-4cfd-b7d5-b842e64eafb4" containerName="dnsmasq-dns" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297226 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-notification-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297239 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="ceilometer-central-agent" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297249 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" containerName="cinder-db-sync" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.297259 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="559981d5-7d2e-4624-a425-53ff3158840a" containerName="proxy-httpd" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.298803 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.304146 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.304317 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.306134 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.345307 4804 scope.go:117] "RemoveContainer" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377415 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377513 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377555 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377624 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377641 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.377671 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.411739 4804 scope.go:117] "RemoveContainer" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.475122 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.476549 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479608 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479678 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479696 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479728 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" 
(UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.479781 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.481505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.481643 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484023 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484165 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484248 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-p4q8k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.484320 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.489061 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.489984 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.501968 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.536061 4804 scope.go:117] "RemoveContainer" containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.544025 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79\": container with ID starting with 26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79 not found: ID does not exist" containerID="26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.544069 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79"} err="failed to get container status \"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79\": rpc error: code = NotFound desc = could not find container \"26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79\": container with ID starting with 26a74adb262dc3c8ca4393b6ae149017392def7671c47005759d8d72a7ff7c79 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 
11:42:19.544095 4804 scope.go:117] "RemoveContainer" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.544952 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111\": container with ID starting with cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111 not found: ID does not exist" containerID="cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.544971 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111"} err="failed to get container status \"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111\": rpc error: code = NotFound desc = could not find container \"cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111\": container with ID starting with cebd477b49d847ca9cff35646113bfeb4ff07645d70ccc4f1dc939b96b094111 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.544986 4804 scope.go:117] "RemoveContainer" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.548999 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7\": container with ID starting with 4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7 not found: ID does not exist" containerID="4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.549043 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7"} err="failed to get container status \"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7\": rpc error: code = NotFound desc = could not find container \"4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7\": container with ID starting with 4e387d4e5e94bfcfc4898c6d0bb0bd93dfa16c7d5baf03ce5b6f056af718a3b7 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.549077 4804 scope.go:117] "RemoveContainer" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" Jan 28 11:42:19 crc kubenswrapper[4804]: E0128 11:42:19.572014 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346\": container with ID starting with 353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346 not found: ID does not exist" containerID="353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.572080 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346"} err="failed to get container status \"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346\": rpc error: code = NotFound desc = could not find container \"353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346\": container with ID starting with 353c873423d84cd2b720ea196d8548b22d2fd0ede14c480ad6257cf35a366346 not found: ID does not exist" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.589680 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.590568 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.590845 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591100 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591226 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591250 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: 
I0128 11:42:19.591376 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.591424 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.609642 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"ceilometer-0\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.661058 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.663056 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694715 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694783 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694839 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694945 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.694996 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.695015 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.696988 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.697560 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.705307 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.706460 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.735130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.736542 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"cinder-scheduler-0\" (UID: 
\"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.736977 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.748520 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"cinder-scheduler-0\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801010 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801065 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801087 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801122 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801147 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.801170 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906264 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906318 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " 
pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906339 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906373 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906400 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.906421 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.907492 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc 
kubenswrapper[4804]: I0128 11:42:19.908431 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.908954 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.909482 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.909766 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.959865 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.961655 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.963450 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"dnsmasq-dns-6bb4fc677f-kzz4k\" (UID: \"2b276638-3e05-4295-825f-321552970394\") " pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:19 crc kubenswrapper[4804]: I0128 11:42:19.985334 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:19.993791 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:19.998422 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.081719 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110230 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110353 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110425 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110450 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110502 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.110588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.215903 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.215966 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.215992 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216008 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dngx2\" (UniqueName: 
\"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216098 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216123 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.216766 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.219470 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.220038 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.220299 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.220698 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.222092 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.242443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"cinder-api-0\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.333348 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.365483 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.713663 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.749193 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5c6795cf88-vn4sv" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.751690 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.944539 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="559981d5-7d2e-4624-a425-53ff3158840a" path="/var/lib/kubelet/pods/559981d5-7d2e-4624-a425-53ff3158840a/volumes" Jan 28 11:42:20 crc kubenswrapper[4804]: I0128 11:42:20.946079 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.015779 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.016045 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api" containerID="cri-o://bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37" gracePeriod=30 Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.016642 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" containerID="cri-o://a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c" gracePeriod=30 Jan 28 11:42:21 crc 
kubenswrapper[4804]: I0128 11:42:21.050491 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.051803 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.082758 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": EOF" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.123804 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.231107 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"8eea3d4a522a6cf3f69074fc2cae25b852205b216d0f0630ee0a40145a648a1d"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.238012 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerStarted","Data":"8cc18ae7a5ea3851e2236f7340657d254b4490ff0fc9f65580ef195204b81856"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.246967 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerStarted","Data":"5bf6ffe97daa495d639b23fe05e4c1895ce6b4f63d483ae138313a43d26164eb"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247024 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: 
\"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247053 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247116 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247187 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247206 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod 
\"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.247238 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.255376 4804 generic.go:334] "Generic (PLEG): container finished" podID="2b276638-3e05-4295-825f-321552970394" containerID="7d55e8f0ae30cf6b17f9255210f13d604f097d0227761c71497f25b925dfda5d" exitCode=0 Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.255413 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerDied","Data":"7d55e8f0ae30cf6b17f9255210f13d604f097d0227761c71497f25b925dfda5d"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.255436 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerStarted","Data":"5ac546ee98d5d28f78181c3225f300b9da32c9a6f7eeb78daa5bbc95aceb3b8d"} Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349278 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349331 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349377 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349433 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349459 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.349587 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod 
\"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.354958 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.358290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.358439 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.359084 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.365176 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc 
kubenswrapper[4804]: I0128 11:42:21.370474 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.382797 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"neutron-7d88fd9b89-w66bx\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") " pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:21 crc kubenswrapper[4804]: I0128 11:42:21.431603 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.268761 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.282758 4804 generic.go:334] "Generic (PLEG): container finished" podID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerID="a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c" exitCode=0 Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.282831 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerDied","Data":"a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.308131 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" 
event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerStarted","Data":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.332169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerStarted","Data":"109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e"} Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.332644 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.376063 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" podStartSLOduration=3.376043912 podStartE2EDuration="3.376043912s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:22.373218422 +0000 UTC m=+1218.168098406" watchObservedRunningTime="2026-01-28 11:42:22.376043912 +0000 UTC m=+1218.170923896" Jan 28 11:42:22 crc kubenswrapper[4804]: I0128 11:42:22.434486 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:42:22 crc kubenswrapper[4804]: W0128 11:42:22.478011 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod095bc753_88c4_456c_a3ae_aa0040a76338.slice/crio-d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e WatchSource:0}: Error finding container d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e: Status 404 returned error can't find the container with id d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.188357 4804 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.341380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerStarted","Data":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.341519 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.343657 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.345931 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerStarted","Data":"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.345969 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerStarted","Data":"d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e"} Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.363038 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.363022785 podStartE2EDuration="4.363022785s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:23.360368161 +0000 UTC m=+1219.155248155" watchObservedRunningTime="2026-01-28 11:42:23.363022785 +0000 UTC m=+1219.157902769" 
Jan 28 11:42:23 crc kubenswrapper[4804]: I0128 11:42:23.578308 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-6b8bbc97bf-dkp56" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.154:9696/\": dial tcp 10.217.0.154:9696: connect: connection refused" Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.354942 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerStarted","Data":"6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.355307 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerStarted","Data":"4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.356428 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerStarted","Data":"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.356570 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.358289 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" containerID="cri-o://5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" gracePeriod=30 Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.358512 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125"} Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.358565 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" containerID="cri-o://a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" gracePeriod=30 Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.399113 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.566458553 podStartE2EDuration="5.399087611s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="2026-01-28 11:42:20.726672076 +0000 UTC m=+1216.521552060" lastFinishedPulling="2026-01-28 11:42:22.559301134 +0000 UTC m=+1218.354181118" observedRunningTime="2026-01-28 11:42:24.388905307 +0000 UTC m=+1220.183785301" watchObservedRunningTime="2026-01-28 11:42:24.399087611 +0000 UTC m=+1220.193967595" Jan 28 11:42:24 crc kubenswrapper[4804]: I0128 11:42:24.414569 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-7d88fd9b89-w66bx" podStartSLOduration=3.414543542 podStartE2EDuration="3.414543542s" podCreationTimestamp="2026-01-28 11:42:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:24.408850371 +0000 UTC m=+1220.203730355" watchObservedRunningTime="2026-01-28 11:42:24.414543542 +0000 UTC m=+1220.209423526" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:24.999228 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.244541 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.369857 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" exitCode=0 Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.369926 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" exitCode=143 Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.371551 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372136 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerDied","Data":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerDied","Data":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372181 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"fc9adb0b-6921-40d7-b50f-abc26763eaf5","Type":"ContainerDied","Data":"5bf6ffe97daa495d639b23fe05e4c1895ce6b4f63d483ae138313a43d26164eb"} Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.372216 4804 scope.go:117] "RemoveContainer" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385150 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385229 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385273 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385325 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385359 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385646 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.385765 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") pod \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\" (UID: \"fc9adb0b-6921-40d7-b50f-abc26763eaf5\") " Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.386971 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.387252 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs" (OuterVolumeSpecName: "logs") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.395030 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.412334 4804 scope.go:117] "RemoveContainer" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.416061 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2" (OuterVolumeSpecName: "kube-api-access-dngx2") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "kube-api-access-dngx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.416086 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts" (OuterVolumeSpecName: "scripts") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.422009 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.466422 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data" (OuterVolumeSpecName: "config-data") pod "fc9adb0b-6921-40d7-b50f-abc26763eaf5" (UID: "fc9adb0b-6921-40d7-b50f-abc26763eaf5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488308 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488350 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488378 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488395 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fc9adb0b-6921-40d7-b50f-abc26763eaf5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488407 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dngx2\" (UniqueName: \"kubernetes.io/projected/fc9adb0b-6921-40d7-b50f-abc26763eaf5-kube-api-access-dngx2\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488418 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/fc9adb0b-6921-40d7-b50f-abc26763eaf5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.488427 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc9adb0b-6921-40d7-b50f-abc26763eaf5-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.594107 4804 scope.go:117] 
"RemoveContainer" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.600647 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": container with ID starting with a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9 not found: ID does not exist" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.600692 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} err="failed to get container status \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": rpc error: code = NotFound desc = could not find container \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": container with ID starting with a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9 not found: ID does not exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.600721 4804 scope.go:117] "RemoveContainer" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.601650 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": container with ID starting with 5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35 not found: ID does not exist" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.601716 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} err="failed to get container status \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": rpc error: code = NotFound desc = could not find container \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": container with ID starting with 5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35 not found: ID does not exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.601767 4804 scope.go:117] "RemoveContainer" containerID="a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.602287 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9"} err="failed to get container status \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": rpc error: code = NotFound desc = could not find container \"a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9\": container with ID starting with a72d0238d1ca939df38a486c9b987302b00ce3aa35d630d79f9564c9e9879ee9 not found: ID does not exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.602311 4804 scope.go:117] "RemoveContainer" containerID="5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.604778 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35"} err="failed to get container status \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": rpc error: code = NotFound desc = could not find container \"5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35\": container with ID starting with 5617b5e82c825720e62c925abdab49a2eacfc9f59796a4240bbaf775219fbc35 not found: ID does not 
exist" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.713334 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.730897 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.751566 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.752056 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752075 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" Jan 28 11:42:25 crc kubenswrapper[4804]: E0128 11:42:25.752100 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752107 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752290 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.752316 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" containerName="cinder-api-log" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.753277 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.755846 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.755998 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.756540 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.764227 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899362 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899443 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899479 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899616 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899665 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899762 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899810 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:25 crc kubenswrapper[4804]: I0128 11:42:25.899839 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod 
\"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001443 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001492 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001524 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001551 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001578 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001607 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001655 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001680 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.001695 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.003764 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.003839 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 
crc kubenswrapper[4804]: I0128 11:42:26.014380 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.015755 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.015897 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.016375 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.020391 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.025661 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod 
\"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.029145 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"cinder-api-0\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.081605 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.824549 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.929548 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc9adb0b-6921-40d7-b50f-abc26763eaf5" path="/var/lib/kubelet/pods/fc9adb0b-6921-40d7-b50f-abc26763eaf5/volumes" Jan 28 11:42:26 crc kubenswrapper[4804]: I0128 11:42:26.997253 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.071626 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"] Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.078229 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" containerID="cri-o://8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6" gracePeriod=30 Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.078627 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" 
containerName="barbican-api-log" containerID="cri-o://c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2" gracePeriod=30 Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.134312 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.397653 4804 generic.go:334] "Generic (PLEG): container finished" podID="96c46652-7506-4118-a507-a5f2b6668c78" containerID="c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2" exitCode=143 Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.397927 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerDied","Data":"c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2"} Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.401219 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerStarted","Data":"60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8"} Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.401452 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.404645 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerStarted","Data":"6b06f838e59a73b485a69b93f766b0fb460afb06549c4aa004f7bac68fc724cc"} Jan 28 11:42:27 crc kubenswrapper[4804]: I0128 11:42:27.423550 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.209883187 podStartE2EDuration="8.423524223s" podCreationTimestamp="2026-01-28 11:42:19 +0000 UTC" firstStartedPulling="2026-01-28 11:42:20.390020171 +0000 UTC m=+1216.184900145" 
lastFinishedPulling="2026-01-28 11:42:26.603661197 +0000 UTC m=+1222.398541181" observedRunningTime="2026-01-28 11:42:27.421151936 +0000 UTC m=+1223.216031920" watchObservedRunningTime="2026-01-28 11:42:27.423524223 +0000 UTC m=+1223.218404207" Jan 28 11:42:28 crc kubenswrapper[4804]: I0128 11:42:28.415188 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerStarted","Data":"b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a"} Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.425528 4804 generic.go:334] "Generic (PLEG): container finished" podID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerID="bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37" exitCode=0 Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.425625 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerDied","Data":"bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37"} Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.428179 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerStarted","Data":"7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774"} Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.428359 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Jan 28 11:42:29 crc kubenswrapper[4804]: I0128 11:42:29.453984 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=4.453962626 podStartE2EDuration="4.453962626s" podCreationTimestamp="2026-01-28 11:42:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 
11:42:29.450060233 +0000 UTC m=+1225.244940217" watchObservedRunningTime="2026-01-28 11:42:29.453962626 +0000 UTC m=+1225.248842610"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.084027 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.153782 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"]
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.154054 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns" containerID="cri-o://a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765" gracePeriod=10
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.163373 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.256047 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.309978 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316611 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316729 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316759 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316935 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.316964 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.317608 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") pod \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\" (UID: \"1f4e070e-7b0f-4a60-9383-7e1a61380fc6\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.331579 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.334350 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw" (OuterVolumeSpecName: "kube-api-access-m86bw") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "kube-api-access-m86bw". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.346622 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:36822->10.217.0.161:9311: read: connection reset by peer"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.346687 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-58c46c5cc8-bpsgv" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.161:9311/healthcheck\": read tcp 10.217.0.2:36824->10.217.0.161:9311: read: connection reset by peer"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.378988 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.380580 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config" (OuterVolumeSpecName: "config") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419837 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m86bw\" (UniqueName: \"kubernetes.io/projected/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-kube-api-access-m86bw\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419872 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419905 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.419919 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.420240 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.441439 4804 generic.go:334] "Generic (PLEG): container finished" podID="7da1add4-521f-473c-8694-ccecf71fce93" containerID="a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765" exitCode=0
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.441549 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerDied","Data":"a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765"}
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.446481 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6b8bbc97bf-dkp56" event={"ID":"1f4e070e-7b0f-4a60-9383-7e1a61380fc6","Type":"ContainerDied","Data":"17370b3c1885a4617f22d7c91afab2fb5a7fa1af0f912912598e79c6bd36be5b"}
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.446523 4804 scope.go:117] "RemoveContainer" containerID="a77115c93ac5035e08ec037be345ec8297ca1f73ac611f0d1dfc69f51b156d7c"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.446698 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6b8bbc97bf-dkp56"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.462946 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.463499 4804 generic.go:334] "Generic (PLEG): container finished" podID="96c46652-7506-4118-a507-a5f2b6668c78" containerID="8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6" exitCode=0
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.463684 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" containerID="cri-o://4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e" gracePeriod=30
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.463994 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerDied","Data":"8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6"}
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.465851 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe" containerID="cri-o://6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088" gracePeriod=30
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.479115 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "1f4e070e-7b0f-4a60-9383-7e1a61380fc6" (UID: "1f4e070e-7b0f-4a60-9383-7e1a61380fc6"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.523121 4804 scope.go:117] "RemoveContainer" containerID="bf7528745919414ab5c0c5536eb5b3fc9885458114b18b78f9462eb6cff21f37"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.526111 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.526147 4804 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.526155 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/1f4e070e-7b0f-4a60-9383-7e1a61380fc6-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.689421 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.787749 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"]
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.800119 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6b8bbc97bf-dkp56"]
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.831934 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.831992 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832124 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832254 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.832273 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") pod \"7da1add4-521f-473c-8694-ccecf71fce93\" (UID: \"7da1add4-521f-473c-8694-ccecf71fce93\") "
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.865150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625" (OuterVolumeSpecName: "kube-api-access-hj625") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "kube-api-access-hj625". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.908760 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.920013 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.934969 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" path="/var/lib/kubelet/pods/1f4e070e-7b0f-4a60-9383-7e1a61380fc6/volumes"
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.936068 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.936088 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.936097 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hj625\" (UniqueName: \"kubernetes.io/projected/7da1add4-521f-473c-8694-ccecf71fce93-kube-api-access-hj625\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.950458 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.955645 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.977033 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config" (OuterVolumeSpecName: "config") pod "7da1add4-521f-473c-8694-ccecf71fce93" (UID: "7da1add4-521f-473c-8694-ccecf71fce93"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:42:30 crc kubenswrapper[4804]: I0128 11:42:30.980635 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.040621 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.040661 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.040673 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/7da1add4-521f-473c-8694-ccecf71fce93-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.141579 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") "
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.141899 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") "
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.141964 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") "
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.142107 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") "
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.142140 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") pod \"96c46652-7506-4118-a507-a5f2b6668c78\" (UID: \"96c46652-7506-4118-a507-a5f2b6668c78\") "
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.143083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs" (OuterVolumeSpecName: "logs") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.145143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556" (OuterVolumeSpecName: "kube-api-access-th556") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "kube-api-access-th556". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.150053 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.168221 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.235426 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data" (OuterVolumeSpecName: "config-data") pod "96c46652-7506-4118-a507-a5f2b6668c78" (UID: "96c46652-7506-4118-a507-a5f2b6668c78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244119 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244160 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244171 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/96c46652-7506-4118-a507-a5f2b6668c78-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244182 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-th556\" (UniqueName: \"kubernetes.io/projected/96c46652-7506-4118-a507-a5f2b6668c78-kube-api-access-th556\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.244196 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/96c46652-7506-4118-a507-a5f2b6668c78-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.474909 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5" event={"ID":"7da1add4-521f-473c-8694-ccecf71fce93","Type":"ContainerDied","Data":"613d25f46f67af98ce70f3f5abf8d934501d6069e147f6af856f94fa63cd3fb2"}
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.474932 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-688c87cc99-m7bk5"
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.474973 4804 scope.go:117] "RemoveContainer" containerID="a14505e3f8f6847755a9c7ba2c0dbd679286c5c859ffe247846891adcf951765"
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.487116 4804 generic.go:334] "Generic (PLEG): container finished" podID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerID="6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088" exitCode=0
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.487159 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerDied","Data":"6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088"}
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.491063 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-58c46c5cc8-bpsgv"
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.491308 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-58c46c5cc8-bpsgv" event={"ID":"96c46652-7506-4118-a507-a5f2b6668c78","Type":"ContainerDied","Data":"372d567d59c59f91cd799085e6762b25e3f3df3f1a5319b2592a5c634b618a46"}
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.498475 4804 scope.go:117] "RemoveContainer" containerID="e95cc363dac842375743e8314956bb8d9f168054cc0e4b1f83fe0a24457640be"
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.511949 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"]
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.533063 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-688c87cc99-m7bk5"]
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.539852 4804 scope.go:117] "RemoveContainer" containerID="8d2ae7c993c91ad0905eca59b162f3cd21ce7469700e79350f3b9f5b4056fdb6"
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.544187 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"]
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.550321 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-58c46c5cc8-bpsgv"]
Jan 28 11:42:31 crc kubenswrapper[4804]: I0128 11:42:31.567311 4804 scope.go:117] "RemoveContainer" containerID="c025f7a64dc8e48661b9d0d9b892d691c7104b5918c48e37213587b764f98ac2"
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.504661 4804 generic.go:334] "Generic (PLEG): container finished" podID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerID="4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e" exitCode=0
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.504991 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerDied","Data":"4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e"}
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.647046 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773816 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") "
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773900 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") "
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773931 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") "
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.773982 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") "
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774066 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") "
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774093 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") pod \"e213f7b0-f3b8-45f6-b965-ed909114500f\" (UID: \"e213f7b0-f3b8-45f6-b965-ed909114500f\") "
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774082 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.774426 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/e213f7b0-f3b8-45f6-b965-ed909114500f-etc-machine-id\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.781439 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts" (OuterVolumeSpecName: "scripts") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.793585 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l" (OuterVolumeSpecName: "kube-api-access-ckd7l") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "kube-api-access-ckd7l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.793690 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.823798 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875578 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875606 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875615 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckd7l\" (UniqueName: \"kubernetes.io/projected/e213f7b0-f3b8-45f6-b965-ed909114500f-kube-api-access-ckd7l\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.875626 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.891104 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data" (OuterVolumeSpecName: "config-data") pod "e213f7b0-f3b8-45f6-b965-ed909114500f" (UID: "e213f7b0-f3b8-45f6-b965-ed909114500f"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.924843 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7da1add4-521f-473c-8694-ccecf71fce93" path="/var/lib/kubelet/pods/7da1add4-521f-473c-8694-ccecf71fce93/volumes"
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.925685 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96c46652-7506-4118-a507-a5f2b6668c78" path="/var/lib/kubelet/pods/96c46652-7506-4118-a507-a5f2b6668c78/volumes"
Jan 28 11:42:32 crc kubenswrapper[4804]: I0128 11:42:32.977640 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e213f7b0-f3b8-45f6-b965-ed909114500f-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.067733 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode213f7b0_f3b8_45f6_b965_ed909114500f.slice/crio-8cc18ae7a5ea3851e2236f7340657d254b4490ff0fc9f65580ef195204b81856\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode213f7b0_f3b8_45f6_b965_ed909114500f.slice\": RecentStats: unable to find data in memory cache]"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.516755 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"e213f7b0-f3b8-45f6-b965-ed909114500f","Type":"ContainerDied","Data":"8cc18ae7a5ea3851e2236f7340657d254b4490ff0fc9f65580ef195204b81856"}
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.517924 4804 scope.go:117] "RemoveContainer" containerID="6e63076fc85ddedcafa8e57fc77c689aa3dd692341cb3545cffb8e8c36341088"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.517860 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.541922 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.545359 4804 scope.go:117] "RemoveContainer" containerID="4a4325a7f87ad4e4d3dfec8dfb9484dd1be4c16f01570f585a783d57a6a9b20e"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.554205 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.575661 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.576603 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.576808 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api"
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.576897 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.576957 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd"
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.577035 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="init"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.577804 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="init"
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.577869 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.577971 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log"
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578037 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578099 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe"
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578168 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578219 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns"
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578276 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api"
Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578324 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api"
Jan 28 11:42:33 crc kubenswrapper[4804]: E0128 11:42:33.578382 4804 cpu_manager.go:410] "RemoveStaleState: removing container"
podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578434 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578744 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578818 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="cinder-scheduler" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578894 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-api" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.578951 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="7da1add4-521f-473c-8694-ccecf71fce93" containerName="dnsmasq-dns" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.579002 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" containerName="probe" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.579068 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="1f4e070e-7b0f-4a60-9383-7e1a61380fc6" containerName="neutron-httpd" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.579140 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="96c46652-7506-4118-a507-a5f2b6668c78" containerName="barbican-api-log" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.580791 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.585638 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.590659 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702197 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702281 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702307 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702333 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc 
kubenswrapper[4804]: I0128 11:42:33.702369 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.702405 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.803995 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804072 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804133 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804223 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804266 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804326 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.804401 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.808771 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.808852 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " 
pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.809678 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.810604 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.819903 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"cinder-scheduler-0\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.904702 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:42:33 crc kubenswrapper[4804]: I0128 11:42:33.919310 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.330385 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.530074 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerStarted","Data":"4cc14b4a4b262ffd7dca6ce3a4c78be1958d2621d179512804ce0187bc8fd56e"} Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.726982 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.730507 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.735275 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.735441 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-l6dg9" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.735624 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.756952 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.826940 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"openstackclient\" 
(UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.827018 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.827103 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.827259 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.934687 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.935228 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " 
pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.935289 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.935372 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.936424 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.959654 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e213f7b0-f3b8-45f6-b965-ed909114500f" path="/var/lib/kubelet/pods/e213f7b0-f3b8-45f6-b965-ed909114500f/volumes" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.970717 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.978674 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:34 crc kubenswrapper[4804]: I0128 11:42:34.983614 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"openstackclient\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " pod="openstack/openstackclient" Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.103949 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.563547 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerStarted","Data":"005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7"} Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.680112 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.959336 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:35 crc kubenswrapper[4804]: I0128 11:42:35.960376 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:42:36 crc kubenswrapper[4804]: I0128 11:42:36.575699 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerStarted","Data":"c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c"} Jan 28 11:42:36 crc kubenswrapper[4804]: I0128 11:42:36.585731 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" 
event={"ID":"eaba1c3c-49d4-498e-94b8-9c8cbe8660da","Type":"ContainerStarted","Data":"8f4a06f61311546314d53868ba1af1c45d570329c2ec6e58fe2ccf8f3233f81c"} Jan 28 11:42:36 crc kubenswrapper[4804]: I0128 11:42:36.603618 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.603596762 podStartE2EDuration="3.603596762s" podCreationTimestamp="2026-01-28 11:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:36.595966422 +0000 UTC m=+1232.390846406" watchObservedRunningTime="2026-01-28 11:42:36.603596762 +0000 UTC m=+1232.398476736" Jan 28 11:42:38 crc kubenswrapper[4804]: I0128 11:42:38.445419 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Jan 28 11:42:38 crc kubenswrapper[4804]: I0128 11:42:38.905398 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.017896 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.018538 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" containerID="cri-o://d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0" gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.019333 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" containerID="cri-o://60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8" gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.019391 4804 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" containerID="cri-o://d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125" gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.019431 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" containerID="cri-o://2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc" gracePeriod=30 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.027175 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ceilometer-0" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.163:3000/\": EOF" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.365901 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.367348 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.370168 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-internal-svc" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.370599 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"swift-proxy-config-data" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.370939 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-swift-public-svc" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.376894 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.376940 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.376969 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377009 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377039 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377063 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377082 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.377104 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.383992 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] 
Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.480747 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.480805 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.480862 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481121 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481293 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481360 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481422 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481599 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481651 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.481752 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.488212 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.491465 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.492452 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.497861 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.505993 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.507374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") pod \"swift-proxy-59fb5cbd47-wwqmq\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") " pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641029 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8" exitCode=0 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641060 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125" exitCode=2 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641070 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0" exitCode=0 Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641089 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8"} Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125"} Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.641121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0"} Jan 28 11:42:41 crc kubenswrapper[4804]: I0128 11:42:41.690997 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:44 crc kubenswrapper[4804]: I0128 11:42:44.206581 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Jan 28 11:42:46 crc kubenswrapper[4804]: I0128 11:42:46.735463 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerID="2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc" exitCode=0 Jan 28 11:42:46 crc kubenswrapper[4804]: I0128 11:42:46.735565 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc"} Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.887592 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918343 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918620 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: 
\"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918696 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918735 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918801 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.918822 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") pod \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\" (UID: \"bf5a35f4-0777-4b67-978a-ce8ab97000d4\") " Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.919141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.919945 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.925728 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts" (OuterVolumeSpecName: "scripts") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.927867 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp" (OuterVolumeSpecName: "kube-api-access-88vrp") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "kube-api-access-88vrp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.940853 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:42:47 crc kubenswrapper[4804]: I0128 11:42:47.973052 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020143 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020188 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020202 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88vrp\" (UniqueName: \"kubernetes.io/projected/bf5a35f4-0777-4b67-978a-ce8ab97000d4-kube-api-access-88vrp\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020216 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/bf5a35f4-0777-4b67-978a-ce8ab97000d4-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.020230 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.047564 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.052405 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data" (OuterVolumeSpecName: "config-data") pod "bf5a35f4-0777-4b67-978a-ce8ab97000d4" (UID: "bf5a35f4-0777-4b67-978a-ce8ab97000d4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.121402 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.121624 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bf5a35f4-0777-4b67-978a-ce8ab97000d4-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.765667 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerStarted","Data":"b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.765988 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerStarted","Data":"6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.766001 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerStarted","Data":"6d2eca1ee21c2e58f6c5ebc2fd659f0e3b36f17ff8d88938be99b51b5573272e"} Jan 28 11:42:48 crc kubenswrapper[4804]: 
I0128 11:42:48.767213 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.767243 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.769006 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"eaba1c3c-49d4-498e-94b8-9c8cbe8660da","Type":"ContainerStarted","Data":"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.772925 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"bf5a35f4-0777-4b67-978a-ce8ab97000d4","Type":"ContainerDied","Data":"8eea3d4a522a6cf3f69074fc2cae25b852205b216d0f0630ee0a40145a648a1d"} Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.772971 4804 scope.go:117] "RemoveContainer" containerID="60911acc36e6a6a2e4395ed6f280d91b95f88a16dcea51245f1032e5c6e5c6d8" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.773095 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.796370 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.796651 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log" containerID="cri-o://da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998" gracePeriod=30 Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.796810 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd" containerID="cri-o://eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a" gracePeriod=30 Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.809614 4804 scope.go:117] "RemoveContainer" containerID="d9a024557230838bdc17acdb5049c5e8f2ec2a08a39b86af7c3f3e3797853125" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.810005 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podStartSLOduration=7.809983354 podStartE2EDuration="7.809983354s" podCreationTimestamp="2026-01-28 11:42:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:48.802101957 +0000 UTC m=+1244.596981941" watchObservedRunningTime="2026-01-28 11:42:48.809983354 +0000 UTC m=+1244.604863338" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.838134 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.855163 4804 scope.go:117] "RemoveContainer" 
containerID="2868fa37a5170a1df9710a1b7d3fc5cd16c8581466e759928b2c3aac1728ebdc" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.861687 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884438 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 11:42:48.884867 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884902 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 11:42:48.884925 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884933 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 11:42:48.884963 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884972 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" Jan 28 11:42:48 crc kubenswrapper[4804]: E0128 11:42:48.884986 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.884993 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" Jan 28 11:42:48 
crc kubenswrapper[4804]: I0128 11:42:48.885205 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-notification-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.885252 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="ceilometer-central-agent" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.885271 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="sg-core" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.885284 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" containerName="proxy-httpd" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.887305 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.892394 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.892520 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.896693 4804 scope.go:117] "RemoveContainer" containerID="d53b727b65344f9dcbac7c3c08faebc9e0b148d77c02a88ad5ad99d02b9e26a0" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.900637 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.764383106 podStartE2EDuration="14.900619233s" podCreationTimestamp="2026-01-28 11:42:34 +0000 UTC" firstStartedPulling="2026-01-28 11:42:35.676733068 +0000 UTC m=+1231.471613052" lastFinishedPulling="2026-01-28 11:42:47.812969195 +0000 UTC m=+1243.607849179" observedRunningTime="2026-01-28 11:42:48.858331068 +0000 UTC 
m=+1244.653211052" watchObservedRunningTime="2026-01-28 11:42:48.900619233 +0000 UTC m=+1244.695499217" Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.911405 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:48 crc kubenswrapper[4804]: I0128 11:42:48.949457 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf5a35f4-0777-4b67-978a-ce8ab97000d4" path="/var/lib/kubelet/pods/bf5a35f4-0777-4b67-978a-ce8ab97000d4/volumes" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050530 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050588 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050738 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050827 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" 
Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.050897 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.051001 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.051034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152540 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152604 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152627 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152703 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152743 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152785 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.152814 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.153601 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.153690 
4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.160557 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.161629 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.161788 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.167572 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.175052 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"ceilometer-0\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " 
pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.226032 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.785314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerDied","Data":"da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998"} Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.785269 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerID="da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998" exitCode=143 Jan 28 11:42:49 crc kubenswrapper[4804]: I0128 11:42:49.805541 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.012361 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.013596 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.028513 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.113089 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.114762 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.125759 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.176429 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.177377 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.222975 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.225011 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.226722 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.230758 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.279510 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.279556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.281058 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.281231 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 
11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.282034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.303337 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"nova-api-db-create-x5xnt\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.319567 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.321211 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.333405 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.338667 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385300 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385377 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385510 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.385572 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.387162 4804 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.425689 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"nova-cell0-db-create-w8q7w\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.433081 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.442744 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.444224 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.447114 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.479122 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504409 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504840 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504899 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.504977 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.505277 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.509033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.526596 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"nova-api-0c6f-account-create-update-j6x65\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.542482 4804 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.542788 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log" containerID="cri-o://08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af" gracePeriod=30 Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.543928 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd" containerID="cri-o://4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa" gracePeriod=30 Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646352 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646418 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646437 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " 
pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.646526 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.648134 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.653384 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.668669 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.687766 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.688979 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.699364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"nova-cell0-2c81-account-create-update-ldfns\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.704318 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"nova-cell1-db-create-mw42v\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.715364 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.800357 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.838316 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.838419 4804 generic.go:334] "Generic (PLEG): container finished" podID="268e1424-c22b-4694-a27b-e000fae8fc84" containerID="08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af" exitCode=143 Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.838507 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerDied","Data":"08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af"} Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.851318 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.851389 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"d41d4c7be9d35074e4d66f189d7ceeb0f8e689b845892ee568c44e96679d5f03"} Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.863191 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.863237 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod 
\"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.868327 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.882473 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.968902 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.969015 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:50 crc kubenswrapper[4804]: I0128 11:42:50.973510 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.001019 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5xk5\" (UniqueName: 
\"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod \"nova-cell1-7d4e-account-create-update-hrzrw\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.080759 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.227825 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:42:51 crc kubenswrapper[4804]: W0128 11:42:51.245267 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod47a68429_2ef0_45da_8a73_62231d018738.slice/crio-3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3 WatchSource:0}: Error finding container 3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3: Status 404 returned error can't find the container with id 3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.390444 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.455340 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-7d88fd9b89-w66bx" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.480151 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.554989 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.563422 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c6795cf88-vn4sv" 
podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd" containerID="cri-o://0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be" gracePeriod=30 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.563558 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5c6795cf88-vn4sv" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api" containerID="cri-o://a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5" gracePeriod=30 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.582320 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:42:51 crc kubenswrapper[4804]: W0128 11:42:51.594910 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod18b33b00_9642_45dc_8256_5db39ca166f1.slice/crio-94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204 WatchSource:0}: Error finding container 94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204: Status 404 returned error can't find the container with id 94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204 Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.697313 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.751818 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"] Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.862169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" event={"ID":"99ffbce9-a3f3-4012-861a-fae498510fde","Type":"ContainerStarted","Data":"0662751309b7375a007976b8196a7894bedc97a3584200e6148287c968549f62"} Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.867111 4804 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x5xnt" event={"ID":"5e2ade0c-9218-4f08-b78f-b6b6ede461f7","Type":"ContainerStarted","Data":"975bbe68b308267dfc9049aee26ddd5b3539837326d005858cadef68fd9d4a1c"} Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.868197 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w8q7w" event={"ID":"47a68429-2ef0-45da-8a73-62231d018738","Type":"ContainerStarted","Data":"3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3"} Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.873209 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mw42v" event={"ID":"18b33b00-9642-45dc-8256-5db39ca166f1","Type":"ContainerStarted","Data":"94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204"} Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.878119 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-j6x65" event={"ID":"2baa2aa0-600d-4728-bb8c-7fee05022658","Type":"ContainerStarted","Data":"97f2b7b646319d95351886b9e77211f399a1a8f688c3dd4fd36b85616ee21cb0"} Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.880298 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" event={"ID":"bf79509c-10e0-4ebc-a55d-e46f5497e2fd","Type":"ContainerStarted","Data":"333d31dec4f3b664e0752671fa75e225c60c13136f6e518709c7acd66bbc0431"} Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.980469 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:33600->10.217.0.151:9292: read: connection reset by peer" Jan 28 11:42:51 crc kubenswrapper[4804]: I0128 11:42:51.980806 4804 
prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/glance-default-external-api-0" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.151:9292/healthcheck\": read tcp 10.217.0.2:33616->10.217.0.151:9292: read: connection reset by peer" Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.891863 4804 generic.go:334] "Generic (PLEG): container finished" podID="47a68429-2ef0-45da-8a73-62231d018738" containerID="d61b26c6574f005cf741e8617cfd877723c9dba4e0c0da9dc9d5ab35b7c99c44" exitCode=0 Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.892020 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w8q7w" event={"ID":"47a68429-2ef0-45da-8a73-62231d018738","Type":"ContainerDied","Data":"d61b26c6574f005cf741e8617cfd877723c9dba4e0c0da9dc9d5ab35b7c99c44"} Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.894050 4804 generic.go:334] "Generic (PLEG): container finished" podID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be" exitCode=0 Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.894099 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerDied","Data":"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"} Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.895081 4804 generic.go:334] "Generic (PLEG): container finished" podID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerID="4396681344b1f4b062c4d3af20aad6ea83e5895641201a1d6581293d78a469d6" exitCode=0 Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.895121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-j6x65" 
event={"ID":"2baa2aa0-600d-4728-bb8c-7fee05022658","Type":"ContainerDied","Data":"4396681344b1f4b062c4d3af20aad6ea83e5895641201a1d6581293d78a469d6"} Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.904324 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerID="eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a" exitCode=0 Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.904451 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerDied","Data":"eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a"} Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.908798 4804 generic.go:334] "Generic (PLEG): container finished" podID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerID="00fa4f179f72ae4ed60b5277bb72d034bf25e0316d4ff2c0b245c99e5bbbb1c0" exitCode=0 Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.908874 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" event={"ID":"bf79509c-10e0-4ebc-a55d-e46f5497e2fd","Type":"ContainerDied","Data":"00fa4f179f72ae4ed60b5277bb72d034bf25e0316d4ff2c0b245c99e5bbbb1c0"} Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.917258 4804 generic.go:334] "Generic (PLEG): container finished" podID="18b33b00-9642-45dc-8256-5db39ca166f1" containerID="75c0ffcb0c025a38e738831b1e54d6accb5a07b7f29d2b3b100a75e69d401044" exitCode=0 Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.922199 4804 generic.go:334] "Generic (PLEG): container finished" podID="99ffbce9-a3f3-4012-861a-fae498510fde" containerID="942dab2562186e8c843d08a81baf4b10000e2f951efd28dd679bda2d6239dabc" exitCode=0 Jan 28 11:42:52 crc kubenswrapper[4804]: I0128 11:42:52.923640 4804 generic.go:334] "Generic (PLEG): container finished" podID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" 
containerID="1aa2852183ab3447d372d5d5e67a6b2f61d8ddd3d77cfdf97f897ca4044fdfeb" exitCode=0 Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.054732 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mw42v" event={"ID":"18b33b00-9642-45dc-8256-5db39ca166f1","Type":"ContainerDied","Data":"75c0ffcb0c025a38e738831b1e54d6accb5a07b7f29d2b3b100a75e69d401044"} Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.054782 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" event={"ID":"99ffbce9-a3f3-4012-861a-fae498510fde","Type":"ContainerDied","Data":"942dab2562186e8c843d08a81baf4b10000e2f951efd28dd679bda2d6239dabc"} Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.054794 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x5xnt" event={"ID":"5e2ade0c-9218-4f08-b78f-b6b6ede461f7","Type":"ContainerDied","Data":"1aa2852183ab3447d372d5d5e67a6b2f61d8ddd3d77cfdf97f897ca4044fdfeb"} Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.262760 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.326773 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327473 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327494 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327517 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327650 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327696 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.327753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") pod \"c6e31fe0-ad05-40cd-9eee-1597a421a009\" (UID: \"c6e31fe0-ad05-40cd-9eee-1597a421a009\") " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.331141 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.332091 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs" (OuterVolumeSpecName: "logs") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.332401 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.332506 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5" (OuterVolumeSpecName: "kube-api-access-9qss5") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "kube-api-access-9qss5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.335316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts" (OuterVolumeSpecName: "scripts") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.372164 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429456 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429511 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429525 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429538 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429551 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9qss5\" (UniqueName: \"kubernetes.io/projected/c6e31fe0-ad05-40cd-9eee-1597a421a009-kube-api-access-9qss5\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.429564 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c6e31fe0-ad05-40cd-9eee-1597a421a009-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.458691 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data" (OuterVolumeSpecName: "config-data") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.469756 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.476359 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c6e31fe0-ad05-40cd-9eee-1597a421a009" (UID: "c6e31fe0-ad05-40cd-9eee-1597a421a009"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.530780 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.530805 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.530816 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c6e31fe0-ad05-40cd-9eee-1597a421a009-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.955134 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"c6e31fe0-ad05-40cd-9eee-1597a421a009","Type":"ContainerDied","Data":"77deabf65e4246979130b557c75fc43e2d7873b2dc124e7c3da74d90778d94aa"} Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.955504 4804 scope.go:117] "RemoveContainer" containerID="eb26cedcbdf60c84a6ee55e21403b89acac09cab6b2379020603bb9402535d6a" Jan 28 
11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.955676 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:42:53 crc kubenswrapper[4804]: I0128 11:42:53.998339 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.072962 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.103008 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.129863 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:42:54 crc kubenswrapper[4804]: E0128 11:42:54.130464 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.130486 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd" Jan 28 11:42:54 crc kubenswrapper[4804]: E0128 11:42:54.130515 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.130523 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.130753 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-log" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 
11:42:54.130777 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" containerName="glance-httpd" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.131863 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.137342 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.137424 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.168021 4804 scope.go:117] "RemoveContainer" containerID="da91222f31b9d2c38c4e6f743c67ffcd04bf815b945ad08e5f7f9977696c9998" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.175094 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266495 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266920 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.266962 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267019 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267068 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267091 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.267120 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.368822 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369511 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369583 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369689 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.369738 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.370002 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.370122 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.370392 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.375859 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.377311 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.379707 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.419333 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"glance-default-external-api-0\" (UID: 
\"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.422407 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.439071 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-external-api-0\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.456377 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.611683 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.678161 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") pod \"99ffbce9-a3f3-4012-861a-fae498510fde\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.678460 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") pod \"99ffbce9-a3f3-4012-861a-fae498510fde\" (UID: \"99ffbce9-a3f3-4012-861a-fae498510fde\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.679414 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "99ffbce9-a3f3-4012-861a-fae498510fde" (UID: "99ffbce9-a3f3-4012-861a-fae498510fde"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.683774 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5" (OuterVolumeSpecName: "kube-api-access-p5xk5") pod "99ffbce9-a3f3-4012-861a-fae498510fde" (UID: "99ffbce9-a3f3-4012-861a-fae498510fde"). InnerVolumeSpecName "kube-api-access-p5xk5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.781004 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5xk5\" (UniqueName: \"kubernetes.io/projected/99ffbce9-a3f3-4012-861a-fae498510fde-kube-api-access-p5xk5\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.781042 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/99ffbce9-a3f3-4012-861a-fae498510fde-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.801901 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.883444 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") pod \"2baa2aa0-600d-4728-bb8c-7fee05022658\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.883812 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") pod \"2baa2aa0-600d-4728-bb8c-7fee05022658\" (UID: \"2baa2aa0-600d-4728-bb8c-7fee05022658\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.884572 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2baa2aa0-600d-4728-bb8c-7fee05022658" (UID: "2baa2aa0-600d-4728-bb8c-7fee05022658"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.890296 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6" (OuterVolumeSpecName: "kube-api-access-95tm6") pod "2baa2aa0-600d-4728-bb8c-7fee05022658" (UID: "2baa2aa0-600d-4728-bb8c-7fee05022658"). InnerVolumeSpecName "kube-api-access-95tm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.913520 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.920631 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.949785 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6e31fe0-ad05-40cd-9eee-1597a421a009" path="/var/lib/kubelet/pods/c6e31fe0-ad05-40cd-9eee-1597a421a009/volumes" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.951804 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.960498 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-x5xnt" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.990914 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") pod \"18b33b00-9642-45dc-8256-5db39ca166f1\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.990985 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") pod \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991256 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") pod \"47a68429-2ef0-45da-8a73-62231d018738\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991432 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") pod \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991485 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") pod \"18b33b00-9642-45dc-8256-5db39ca166f1\" (UID: \"18b33b00-9642-45dc-8256-5db39ca166f1\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991548 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") pod \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\" (UID: \"bf79509c-10e0-4ebc-a55d-e46f5497e2fd\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991620 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") pod \"47a68429-2ef0-45da-8a73-62231d018738\" (UID: \"47a68429-2ef0-45da-8a73-62231d018738\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991647 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") pod \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\" (UID: \"5e2ade0c-9218-4f08-b78f-b6b6ede461f7\") " Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.991763 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "18b33b00-9642-45dc-8256-5db39ca166f1" (UID: "18b33b00-9642-45dc-8256-5db39ca166f1"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992126 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "5e2ade0c-9218-4f08-b78f-b6b6ede461f7" (UID: "5e2ade0c-9218-4f08-b78f-b6b6ede461f7"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992433 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/18b33b00-9642-45dc-8256-5db39ca166f1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992447 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2baa2aa0-600d-4728-bb8c-7fee05022658-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992458 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95tm6\" (UniqueName: \"kubernetes.io/projected/2baa2aa0-600d-4728-bb8c-7fee05022658-kube-api-access-95tm6\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992468 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.992820 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "bf79509c-10e0-4ebc-a55d-e46f5497e2fd" (UID: "bf79509c-10e0-4ebc-a55d-e46f5497e2fd"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:54 crc kubenswrapper[4804]: I0128 11:42:54.993227 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "47a68429-2ef0-45da-8a73-62231d018738" (UID: "47a68429-2ef0-45da-8a73-62231d018738"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:54.999094 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2" (OuterVolumeSpecName: "kube-api-access-sk5z2") pod "5e2ade0c-9218-4f08-b78f-b6b6ede461f7" (UID: "5e2ade0c-9218-4f08-b78f-b6b6ede461f7"). InnerVolumeSpecName "kube-api-access-sk5z2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.002032 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg" (OuterVolumeSpecName: "kube-api-access-dbntg") pod "47a68429-2ef0-45da-8a73-62231d018738" (UID: "47a68429-2ef0-45da-8a73-62231d018738"). InnerVolumeSpecName "kube-api-access-dbntg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.005565 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64" (OuterVolumeSpecName: "kube-api-access-lgh64") pod "bf79509c-10e0-4ebc-a55d-e46f5497e2fd" (UID: "bf79509c-10e0-4ebc-a55d-e46f5497e2fd"). InnerVolumeSpecName "kube-api-access-lgh64". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.013955 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj" (OuterVolumeSpecName: "kube-api-access-8j5nj") pod "18b33b00-9642-45dc-8256-5db39ca166f1" (UID: "18b33b00-9642-45dc-8256-5db39ca166f1"). InnerVolumeSpecName "kube-api-access-8j5nj". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.045515 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.062771 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-w8q7w"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.062966 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-w8q7w" event={"ID":"47a68429-2ef0-45da-8a73-62231d018738","Type":"ContainerDied","Data":"3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.063002 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3303a0648c6e8b6893c683e3553bcfe11a464ab70f43d3dfd2c3af43d8aa3fa3"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.090461 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-2c81-account-create-update-ldfns" event={"ID":"bf79509c-10e0-4ebc-a55d-e46f5497e2fd","Type":"ContainerDied","Data":"333d31dec4f3b664e0752671fa75e225c60c13136f6e518709c7acd66bbc0431"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.090508 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="333d31dec4f3b664e0752671fa75e225c60c13136f6e518709c7acd66bbc0431"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.090584 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-2c81-account-create-update-ldfns"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.104419 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8j5nj\" (UniqueName: \"kubernetes.io/projected/18b33b00-9642-45dc-8256-5db39ca166f1-kube-api-access-8j5nj\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.104466 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lgh64\" (UniqueName: \"kubernetes.io/projected/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-kube-api-access-lgh64\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.104481 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbntg\" (UniqueName: \"kubernetes.io/projected/47a68429-2ef0-45da-8a73-62231d018738-kube-api-access-dbntg\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.105738 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sk5z2\" (UniqueName: \"kubernetes.io/projected/5e2ade0c-9218-4f08-b78f-b6b6ede461f7-kube-api-access-sk5z2\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.105770 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/bf79509c-10e0-4ebc-a55d-e46f5497e2fd-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.105785 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/47a68429-2ef0-45da-8a73-62231d018738-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.112632 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-j6x65" event={"ID":"2baa2aa0-600d-4728-bb8c-7fee05022658","Type":"ContainerDied","Data":"97f2b7b646319d95351886b9e77211f399a1a8f688c3dd4fd36b85616ee21cb0"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.112676 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97f2b7b646319d95351886b9e77211f399a1a8f688c3dd4fd36b85616ee21cb0"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.112747 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-j6x65"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.120826 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw" event={"ID":"99ffbce9-a3f3-4012-861a-fae498510fde","Type":"ContainerDied","Data":"0662751309b7375a007976b8196a7894bedc97a3584200e6148287c968549f62"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.120869 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0662751309b7375a007976b8196a7894bedc97a3584200e6148287c968549f62"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.120969 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-7d4e-account-create-update-hrzrw"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.152578 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-x5xnt"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.152912 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-x5xnt" event={"ID":"5e2ade0c-9218-4f08-b78f-b6b6ede461f7","Type":"ContainerDied","Data":"975bbe68b308267dfc9049aee26ddd5b3539837326d005858cadef68fd9d4a1c"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.152937 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="975bbe68b308267dfc9049aee26ddd5b3539837326d005858cadef68fd9d4a1c"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.176224 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-mw42v" event={"ID":"18b33b00-9642-45dc-8256-5db39ca166f1","Type":"ContainerDied","Data":"94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.176304 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94fad7cc875d46abf78ff18d783640ec9c22adc2ef3de3116b2c9c1993725204"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.176378 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-mw42v"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.195902 4804 generic.go:334] "Generic (PLEG): container finished" podID="268e1424-c22b-4694-a27b-e000fae8fc84" containerID="4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa" exitCode=0
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.195960 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerDied","Data":"4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa"}
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.222561 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.309754 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.309867 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.309916 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310006 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310103 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310179 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310206 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.310332 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") pod \"268e1424-c22b-4694-a27b-e000fae8fc84\" (UID: \"268e1424-c22b-4694-a27b-e000fae8fc84\") "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.314171 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.314575 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs" (OuterVolumeSpecName: "logs") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.319156 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.320184 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx" (OuterVolumeSpecName: "kube-api-access-bgqsx") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "kube-api-access-bgqsx". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.348387 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts" (OuterVolumeSpecName: "scripts") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.356255 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.395456 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415492 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415522 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-httpd-run\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415531 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bgqsx\" (UniqueName: \"kubernetes.io/projected/268e1424-c22b-4694-a27b-e000fae8fc84-kube-api-access-bgqsx\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415541 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415549 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/268e1424-c22b-4694-a27b-e000fae8fc84-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415559 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.415589 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" "
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.420291 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data" (OuterVolumeSpecName: "config-data") pod "268e1424-c22b-4694-a27b-e000fae8fc84" (UID: "268e1424-c22b-4694-a27b-e000fae8fc84"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.477047 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc"
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.494275 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.517593 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.517628 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/268e1424-c22b-4694-a27b-e000fae8fc84-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:55 crc kubenswrapper[4804]: W0128 11:42:55.532051 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5198da96_d6b6_4b80_bb93_838dff10730e.slice/crio-58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919 WatchSource:0}: Error finding container 58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919: Status 404 returned error can't find the container with id 58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919
Jan 28 11:42:55 crc kubenswrapper[4804]: I0128 11:42:55.918213 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.027827 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028187 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028222 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028283 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.028386 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") pod \"17438a34-7ac2-4451-b74e-97ebbf9318f3\" (UID: \"17438a34-7ac2-4451-b74e-97ebbf9318f3\") "
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.034861 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh" (OuterVolumeSpecName: "kube-api-access-fv5jh") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "kube-api-access-fv5jh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.035627 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.114792 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config" (OuterVolumeSpecName: "config") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.126645 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130380 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv5jh\" (UniqueName: \"kubernetes.io/projected/17438a34-7ac2-4451-b74e-97ebbf9318f3-kube-api-access-fv5jh\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130416 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130430 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.130443 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.177866 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "17438a34-7ac2-4451-b74e-97ebbf9318f3" (UID: "17438a34-7ac2-4451-b74e-97ebbf9318f3"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.214168 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerStarted","Data":"58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215684 4804 generic.go:334] "Generic (PLEG): container finished" podID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5" exitCode=0
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215738 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerDied","Data":"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215762 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5c6795cf88-vn4sv" event={"ID":"17438a34-7ac2-4451-b74e-97ebbf9318f3","Type":"ContainerDied","Data":"13867d6cc190021437c374394c8fea3e953c59a7abb2355a73dc4ecc5ca39b58"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.215784 4804 scope.go:117] "RemoveContainer" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.216001 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5c6795cf88-vn4sv"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.228218 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"268e1424-c22b-4694-a27b-e000fae8fc84","Type":"ContainerDied","Data":"69238578a45f6424f2874038dcb7535af5f39f1f664e37959ae69aa2b648befa"}
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.228325 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.232390 4804 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/17438a34-7ac2-4451-b74e-97ebbf9318f3-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.254351 4804 scope.go:117] "RemoveContainer" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.265173 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"]
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.280232 4804 scope.go:117] "RemoveContainer" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.281714 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be\": container with ID starting with 0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be not found: ID does not exist" containerID="0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.281750 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be"} err="failed to get container status \"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be\": rpc error: code = NotFound desc = could not find container \"0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be\": container with ID starting with 0acb4f58cedfc038e116ee700ebb0ff14ccc41a4403c6cd3688234d2aabc05be not found: ID does not exist"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.281775 4804 scope.go:117] "RemoveContainer" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.282060 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5\": container with ID starting with a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5 not found: ID does not exist" containerID="a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.282076 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5"} err="failed to get container status \"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5\": rpc error: code = NotFound desc = could not find container \"a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5\": container with ID starting with a5a8c25c6f1054eb18f5f845b47acdfa87d6db36ddfd466e87224616b24202f5 not found: ID does not exist"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.282088 4804 scope.go:117] "RemoveContainer" containerID="4052c7d5a660ea4162a986128a1346bc3017e577b5eb525f79ff8ea498d7a5aa"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.286934 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5c6795cf88-vn4sv"]
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.325209 4804 scope.go:117] "RemoveContainer" containerID="08457c942fd2dfdc67cc5ef01794dd21f74a9395ce618a5b9717e831bcb6d4af"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.330875 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.346723 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.358686 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359147 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="47a68429-2ef0-45da-8a73-62231d018738" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359162 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="47a68429-2ef0-45da-8a73-62231d018738" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359173 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359182 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359207 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359215 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359233 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359240 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359251 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359258 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359270 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359277 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359292 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359299 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359312 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359319 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359333 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359340 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api"
Jan 28 11:42:56 crc kubenswrapper[4804]: E0128 11:42:56.359359 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359368 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359564 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-api"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359579 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" containerName="neutron-httpd"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359594 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359606 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-httpd"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359620 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359639 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359652 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" containerName="glance-log"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359664 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359674 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="47a68429-2ef0-45da-8a73-62231d018738" containerName="mariadb-database-create"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.359686 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" containerName="mariadb-account-create-update"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.360857 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.366753 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.368009 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.373104 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.537732 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538249 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538371 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538530 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.538773 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.539107 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.539164 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.539260 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641465 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641594 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641622 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641774 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641798 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641825 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.641864 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0"
Jan 28 11:42:56 crc
kubenswrapper[4804]: I0128 11:42:56.642042 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.642808 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.642814 4804 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.648674 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.648827 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.653477 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.667410 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.671672 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.687661 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-0\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " pod="openstack/glance-default-internal-api-0" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.710357 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.720678 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" Jan 28 11:42:56 crc kubenswrapper[4804]: I0128 11:42:56.812044 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.030062 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="17438a34-7ac2-4451-b74e-97ebbf9318f3" path="/var/lib/kubelet/pods/17438a34-7ac2-4451-b74e-97ebbf9318f3/volumes" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.031097 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="268e1424-c22b-4694-a27b-e000fae8fc84" path="/var/lib/kubelet/pods/268e1424-c22b-4694-a27b-e000fae8fc84/volumes" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.289309 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerStarted","Data":"49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096"} Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321308 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-central-agent" containerID="cri-o://92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321598 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerStarted","Data":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321653 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321671 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" 
containerID="cri-o://42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321741 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" containerID="cri-o://286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.321828 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" containerID="cri-o://82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" gracePeriod=30 Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.353342 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.976554887 podStartE2EDuration="9.353324556s" podCreationTimestamp="2026-01-28 11:42:48 +0000 UTC" firstStartedPulling="2026-01-28 11:42:49.806692055 +0000 UTC m=+1245.601572039" lastFinishedPulling="2026-01-28 11:42:56.183461724 +0000 UTC m=+1251.978341708" observedRunningTime="2026-01-28 11:42:57.346985167 +0000 UTC m=+1253.141865181" watchObservedRunningTime="2026-01-28 11:42:57.353324556 +0000 UTC m=+1253.148204540" Jan 28 11:42:57 crc kubenswrapper[4804]: I0128 11:42:57.593563 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.265284 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.369830 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerStarted","Data":"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382329 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382421 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382498 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382628 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382770 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: 
\"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382806 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.382838 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") pod \"20f84576-8347-4b5a-b084-17f248dba057\" (UID: \"20f84576-8347-4b5a-b084-17f248dba057\") " Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.384051 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.384171 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerStarted","Data":"13f3f152dac9edae9ea4638a3a8d8a972d428663034fabf17665286ff2611f13"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.384968 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389440 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" exitCode=0 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389532 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" exitCode=2 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389551 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" exitCode=0 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389562 4804 generic.go:334] "Generic (PLEG): container finished" podID="20f84576-8347-4b5a-b084-17f248dba057" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" exitCode=0 Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389620 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389624 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389678 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389693 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389703 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389713 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"20f84576-8347-4b5a-b084-17f248dba057","Type":"ContainerDied","Data":"d41d4c7be9d35074e4d66f189d7ceeb0f8e689b845892ee568c44e96679d5f03"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.389729 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.401093 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerStarted","Data":"ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e"} Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.403713 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts" (OuterVolumeSpecName: "scripts") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.404983 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89" (OuterVolumeSpecName: "kube-api-access-dhx89") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "kube-api-access-dhx89". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.444905 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.444871826 podStartE2EDuration="4.444871826s" podCreationTimestamp="2026-01-28 11:42:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:58.437490764 +0000 UTC m=+1254.232370748" watchObservedRunningTime="2026-01-28 11:42:58.444871826 +0000 UTC m=+1254.239751800" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487619 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dhx89\" (UniqueName: \"kubernetes.io/projected/20f84576-8347-4b5a-b084-17f248dba057-kube-api-access-dhx89\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487648 4804 reconciler_common.go:293] "Volume detached for 
volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487657 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.487665 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/20f84576-8347-4b5a-b084-17f248dba057-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.503121 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.527966 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.549194 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data" (OuterVolumeSpecName: "config-data") pod "20f84576-8347-4b5a-b084-17f248dba057" (UID: "20f84576-8347-4b5a-b084-17f248dba057"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.589688 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.589729 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.589742 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/20f84576-8347-4b5a-b084-17f248dba057-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.621017 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.640248 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.663510 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.693758 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.694302 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" 
containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.694334 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.694354 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.694965 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.694994 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695011 4804 scope.go:117] 
"RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.695237 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695260 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695273 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.695537 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695578 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695607 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695853 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.695869 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696118 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not 
exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696141 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696292 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696314 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696487 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696504 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696765 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status 
\"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.696786 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697072 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697091 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697302 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697320 4804 scope.go:117] "RemoveContainer" 
containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697740 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.697762 4804 scope.go:117] "RemoveContainer" containerID="42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698421 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b"} err="failed to get container status \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": rpc error: code = NotFound desc = could not find container \"42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b\": container with ID starting with 42273e655c58ff788a0283c83c331c3ca975af1e3c3dbccef07fe172d325cb1b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698441 4804 scope.go:117] "RemoveContainer" containerID="286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698662 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b"} err="failed to get container status \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": rpc error: code = NotFound desc = could 
not find container \"286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b\": container with ID starting with 286ff513d133b5e91cd316e03cf7cebf34a0ccd509c6f81e108a66e4607eb30b not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698684 4804 scope.go:117] "RemoveContainer" containerID="82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698839 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29"} err="failed to get container status \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": rpc error: code = NotFound desc = could not find container \"82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29\": container with ID starting with 82e170da594e18f18f5418ef41500e20285e107f49ac5c00589c9feafa4def29 not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.698859 4804 scope.go:117] "RemoveContainer" containerID="92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.699017 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d"} err="failed to get container status \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": rpc error: code = NotFound desc = could not find container \"92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d\": container with ID starting with 92dc4b5d18c052e3c1ea0bc1db8a3eb8de807e15c50a33e208c1faa509dcbb5d not found: ID does not exist" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.729876 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.738563 4804 kubelet.go:2431] 
"SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.746932 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747340 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747359 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747380 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747386 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747396 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747402 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" Jan 28 11:42:58 crc kubenswrapper[4804]: E0128 11:42:58.747413 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-central-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747418 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-central-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747576 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" 
containerName="ceilometer-central-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747590 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="sg-core" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747597 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="proxy-httpd" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.747609 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="20f84576-8347-4b5a-b084-17f248dba057" containerName="ceilometer-notification-agent" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.749088 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.752478 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.752825 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.772762 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895346 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895719 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: 
\"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895755 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.895984 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.896152 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.896235 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 
11:42:58.924629 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20f84576-8347-4b5a-b084-17f248dba057" path="/var/lib/kubelet/pods/20f84576-8347-4b5a-b084-17f248dba057/volumes" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998118 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998179 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998270 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998356 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: 
I0128 11:42:58.998376 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998400 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.998777 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:58 crc kubenswrapper[4804]: I0128 11:42:58.999036 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.002217 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.003937 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"ceilometer-0\" (UID: 
\"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.004085 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.018601 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.021292 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"ceilometer-0\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.067428 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.413275 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerStarted","Data":"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63"} Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.435137 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=3.435114034 podStartE2EDuration="3.435114034s" podCreationTimestamp="2026-01-28 11:42:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:42:59.431029066 +0000 UTC m=+1255.225909060" watchObservedRunningTime="2026-01-28 11:42:59.435114034 +0000 UTC m=+1255.229994018" Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.564406 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:42:59 crc kubenswrapper[4804]: W0128 11:42:59.565531 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod82e7f566_1434_46e9_b3d3_fffbdb60a6bf.slice/crio-4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab WatchSource:0}: Error finding container 4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab: Status 404 returned error can't find the container with id 4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab Jan 28 11:42:59 crc kubenswrapper[4804]: I0128 11:42:59.866310 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.425770 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d"} Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.426609 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab"} Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.696196 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.697783 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.699596 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.700779 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.701189 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kcpcp" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.712270 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741081 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741411 4804 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741528 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.741650 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.843563 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.843672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc 
kubenswrapper[4804]: I0128 11:43:00.843725 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.843782 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.849614 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.850433 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 11:43:00.855599 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:00 crc kubenswrapper[4804]: I0128 
11:43:00.868135 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"nova-cell0-conductor-db-sync-qbth2\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:01 crc kubenswrapper[4804]: I0128 11:43:01.027728 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:01 crc kubenswrapper[4804]: I0128 11:43:01.503956 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:43:01 crc kubenswrapper[4804]: W0128 11:43:01.505991 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod359ecb47_f044_4273_8589_c0ceedb367b5.slice/crio-daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76 WatchSource:0}: Error finding container daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76: Status 404 returned error can't find the container with id daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76 Jan 28 11:43:02 crc kubenswrapper[4804]: I0128 11:43:02.445688 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2"} Jan 28 11:43:02 crc kubenswrapper[4804]: I0128 11:43:02.448545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerStarted","Data":"daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76"} Jan 28 11:43:03 crc kubenswrapper[4804]: I0128 11:43:03.466476 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa"} Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.458195 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.460550 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.484690 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" containerID="cri-o://19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.484997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerStarted","Data":"bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0"} Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485042 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485381 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" containerID="cri-o://bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485442 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" 
containerID="cri-o://5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.485481 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" containerID="cri-o://d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2" gracePeriod=30 Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.512659 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.035612547 podStartE2EDuration="6.512636588s" podCreationTimestamp="2026-01-28 11:42:58 +0000 UTC" firstStartedPulling="2026-01-28 11:42:59.567353205 +0000 UTC m=+1255.362233189" lastFinishedPulling="2026-01-28 11:43:04.044377246 +0000 UTC m=+1259.839257230" observedRunningTime="2026-01-28 11:43:04.505426813 +0000 UTC m=+1260.300306797" watchObservedRunningTime="2026-01-28 11:43:04.512636588 +0000 UTC m=+1260.307516572" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.517929 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:43:04 crc kubenswrapper[4804]: I0128 11:43:04.525600 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504271 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0" exitCode=0 Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504567 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa" exitCode=2 Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 
11:43:05.504579 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2" exitCode=0 Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504345 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0"} Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504671 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa"} Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.504683 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2"} Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.505895 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:43:05 crc kubenswrapper[4804]: I0128 11:43:05.505928 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.813697 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.813755 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.844691 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/glance-default-internal-api-0" Jan 28 11:43:06 crc kubenswrapper[4804]: I0128 11:43:06.866941 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.683807 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.684969 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.697345 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:07 crc kubenswrapper[4804]: I0128 11:43:07.697386 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.730743 4804 generic.go:334] "Generic (PLEG): container finished" podID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerID="19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d" exitCode=0 Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.730833 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d"} Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.731130 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:43:09 crc kubenswrapper[4804]: I0128 11:43:09.731139 4804 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.151538 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:10 crc 
kubenswrapper[4804]: I0128 11:43:10.183271 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.455120 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522529 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522584 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522772 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522854 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") pod 
\"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522897 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.522919 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") pod \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\" (UID: \"82e7f566-1434-46e9-b3d3-fffbdb60a6bf\") " Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.524150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.525331 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.559870 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts" (OuterVolumeSpecName: "scripts") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). 
InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.570148 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m" (OuterVolumeSpecName: "kube-api-access-9l79m") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "kube-api-access-9l79m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628431 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628501 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9l79m\" (UniqueName: \"kubernetes.io/projected/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-kube-api-access-9l79m\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628513 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.628522 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.657112 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.733484 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.744931 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.744948 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"82e7f566-1434-46e9-b3d3-fffbdb60a6bf","Type":"ContainerDied","Data":"4c8aa2f7bd63f7b48ef2cb6b2c60c420e1195c5a32dd368ab0d14ffa784ddcab"} Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.745028 4804 scope.go:117] "RemoveContainer" containerID="bb181d08668e0c371892ffb81a7cdc4e448883ef0b17e80a4a88e370ec8c31b0" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.753825 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.770225 4804 scope.go:117] "RemoveContainer" containerID="5d1b190c88c3478c96dd782d455a0cd8073b406a803c4a1ba0f960df1c19d2fa" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.773041 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data" (OuterVolumeSpecName: "config-data") pod "82e7f566-1434-46e9-b3d3-fffbdb60a6bf" (UID: "82e7f566-1434-46e9-b3d3-fffbdb60a6bf"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.792093 4804 scope.go:117] "RemoveContainer" containerID="d53733edf16324dce7a6f17448b9b02e2419ebeed21a8b78c5a567d2b9c843f2" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.835074 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.835432 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82e7f566-1434-46e9-b3d3-fffbdb60a6bf-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:10 crc kubenswrapper[4804]: I0128 11:43:10.890201 4804 scope.go:117] "RemoveContainer" containerID="19045e27d44b17964f020e21fca16ea2ab3e1b35f66cb4748132b6158c4a9b1d" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.082599 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.095446 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118140 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118605 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118627 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118666 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" Jan 28 
11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118675 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118698 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118706 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: E0128 11:43:11.118717 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118724 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118967 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-notification-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118987 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="ceilometer-central-agent" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.118999 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="proxy-httpd" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.119019 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" containerName="sg-core" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.120698 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.132558 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.132779 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.148909 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242639 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242669 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242911 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"ceilometer-0\" (UID: 
\"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.242952 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.243006 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.243245 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345341 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345384 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345414 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345512 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345546 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345576 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.345597 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.346026 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 
28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.346204 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.350552 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.351083 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.351558 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.352497 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"ceilometer-0\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.367683 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"ceilometer-0\" 
(UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") " pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.451160 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.768265 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerStarted","Data":"4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14"} Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.794735 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-qbth2" podStartSLOduration=2.893949192 podStartE2EDuration="11.794715745s" podCreationTimestamp="2026-01-28 11:43:00 +0000 UTC" firstStartedPulling="2026-01-28 11:43:01.508593662 +0000 UTC m=+1257.303473646" lastFinishedPulling="2026-01-28 11:43:10.409360215 +0000 UTC m=+1266.204240199" observedRunningTime="2026-01-28 11:43:11.783968429 +0000 UTC m=+1267.578848433" watchObservedRunningTime="2026-01-28 11:43:11.794715745 +0000 UTC m=+1267.589595729" Jan 28 11:43:11 crc kubenswrapper[4804]: I0128 11:43:11.979755 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:12 crc kubenswrapper[4804]: I0128 11:43:12.788568 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"5dff59600756e03acb4484ec19d69924e551231377219d020efce0a7d85e6522"} Jan 28 11:43:12 crc kubenswrapper[4804]: I0128 11:43:12.928454 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82e7f566-1434-46e9-b3d3-fffbdb60a6bf" path="/var/lib/kubelet/pods/82e7f566-1434-46e9-b3d3-fffbdb60a6bf/volumes" Jan 28 11:43:13 crc kubenswrapper[4804]: I0128 11:43:13.789070 4804 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:13 crc kubenswrapper[4804]: I0128 11:43:13.796644 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15"} Jan 28 11:43:14 crc kubenswrapper[4804]: I0128 11:43:14.806712 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15"} Jan 28 11:43:15 crc kubenswrapper[4804]: I0128 11:43:15.817020 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7"} Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.839704 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerStarted","Data":"333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0"} Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840113 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-central-agent" containerID="cri-o://d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15" gracePeriod=30 Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840417 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840660 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" 
containerName="proxy-httpd" containerID="cri-o://333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0" gracePeriod=30 Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840705 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core" containerID="cri-o://11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7" gracePeriod=30 Jan 28 11:43:17 crc kubenswrapper[4804]: I0128 11:43:17.840750 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent" containerID="cri-o://ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15" gracePeriod=30 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.850385 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0" exitCode=0 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.851832 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7" exitCode=2 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.851998 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15" exitCode=0 Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.850564 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0"} Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.852253 4804 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7"} Jan 28 11:43:18 crc kubenswrapper[4804]: I0128 11:43:18.852344 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15"} Jan 28 11:43:23 crc kubenswrapper[4804]: I0128 11:43:23.904120 4804 generic.go:334] "Generic (PLEG): container finished" podID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerID="d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15" exitCode=0 Jan 28 11:43:23 crc kubenswrapper[4804]: I0128 11:43:23.904184 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15"} Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.164265 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.221452 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") "
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.221640 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") "
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222078 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") "
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222126 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") "
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222152 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") "
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222190 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222215 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") "
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222457 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") pod \"efacd8f7-ea6c-47c4-a463-44d8138b8902\" (UID: \"efacd8f7-ea6c-47c4-a463-44d8138b8902\") "
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.222951 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.223784 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.224032 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/efacd8f7-ea6c-47c4-a463-44d8138b8902-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.228460 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4" (OuterVolumeSpecName: "kube-api-access-q8kx4") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "kube-api-access-q8kx4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.230078 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts" (OuterVolumeSpecName: "scripts") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.258956 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.306744 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.316299 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data" (OuterVolumeSpecName: "config-data") pod "efacd8f7-ea6c-47c4-a463-44d8138b8902" (UID: "efacd8f7-ea6c-47c4-a463-44d8138b8902"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325145 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325168 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325214 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325226 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q8kx4\" (UniqueName: \"kubernetes.io/projected/efacd8f7-ea6c-47c4-a463-44d8138b8902-kube-api-access-q8kx4\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.325234 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/efacd8f7-ea6c-47c4-a463-44d8138b8902-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.922376 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.924983 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"efacd8f7-ea6c-47c4-a463-44d8138b8902","Type":"ContainerDied","Data":"5dff59600756e03acb4484ec19d69924e551231377219d020efce0a7d85e6522"}
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.925028 4804 scope.go:117] "RemoveContainer" containerID="333d29b1401d9119492f1216480046cd194f566dc44787a465bad0f59fa284c0"
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.953789 4804 scope.go:117] "RemoveContainer" containerID="11d4b2ad3b6d452f68fb5932d6ce6d513a682096193428fbe90102adbcab8bc7"
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.974760 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:43:24 crc kubenswrapper[4804]: I0128 11:43:24.984419 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.018416 4804 scope.go:117] "RemoveContainer" containerID="ffb92a8922a160357cb5ca0cff786a7b1a6a637aaf11a872f0bbfb95cde65e15"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.068057 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.068539 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.068565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core"
Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.068583 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.068593 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent"
Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.069436 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="proxy-httpd"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.069463 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="proxy-httpd"
Jan 28 11:43:25 crc kubenswrapper[4804]: E0128 11:43:25.069476 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-central-agent"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.069486 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-central-agent"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.069818 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="proxy-httpd"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.070070 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-notification-agent"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.070089 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="ceilometer-central-agent"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.070101 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" containerName="sg-core"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.073129 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.079003 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.079194 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.080639 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.092053 4804 scope.go:117] "RemoveContainer" containerID="d9c8c724413ac4da024c5971e62e73d509fc37fedea6226c7dc51b0b5e8d9d15"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147534 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147585 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147618 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147634 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147765 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147783 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.147917 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250240 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250349 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250388 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250445 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250466 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250565 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.250609 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.251065 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.251178 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.255843 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.259508 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.259820 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.260096 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.268827 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"ceilometer-0\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") " pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.399943 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.860408 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:43:25 crc kubenswrapper[4804]: I0128 11:43:25.926625 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"c8765a95fb8f276f5341ac43164dd55a291ff8252543d039befa66bf61350f2c"}
Jan 28 11:43:26 crc kubenswrapper[4804]: I0128 11:43:26.925735 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efacd8f7-ea6c-47c4-a463-44d8138b8902" path="/var/lib/kubelet/pods/efacd8f7-ea6c-47c4-a463-44d8138b8902/volumes"
Jan 28 11:43:26 crc kubenswrapper[4804]: I0128 11:43:26.935664 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"}
Jan 28 11:43:27 crc kubenswrapper[4804]: I0128 11:43:27.945999 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"}
Jan 28 11:43:28 crc kubenswrapper[4804]: I0128 11:43:28.958823 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"}
Jan 28 11:43:30 crc kubenswrapper[4804]: I0128 11:43:30.551055 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:43:30 crc kubenswrapper[4804]: I0128 11:43:30.979187 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerStarted","Data":"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"}
Jan 28 11:43:30 crc kubenswrapper[4804]: I0128 11:43:30.979748 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.002927 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.997829147 podStartE2EDuration="7.002909229s" podCreationTimestamp="2026-01-28 11:43:24 +0000 UTC" firstStartedPulling="2026-01-28 11:43:25.871685613 +0000 UTC m=+1281.666565597" lastFinishedPulling="2026-01-28 11:43:29.876765695 +0000 UTC m=+1285.671645679" observedRunningTime="2026-01-28 11:43:30.99782023 +0000 UTC m=+1286.792700234" watchObservedRunningTime="2026-01-28 11:43:31.002909229 +0000 UTC m=+1286.797789213"
Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987482 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" containerID="cri-o://f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60" gracePeriod=30
Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987535 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" containerID="cri-o://2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c" gracePeriod=30
Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987556 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" containerID="cri-o://ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4" gracePeriod=30
Jan 28 11:43:31 crc kubenswrapper[4804]: I0128 11:43:31.987535 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" containerID="cri-o://9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916" gracePeriod=30
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006616 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4" exitCode=0
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006925 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916" exitCode=2
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006935 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c" exitCode=0
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006670 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"}
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006972 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"}
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.006987 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"}
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.829068 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937523 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") "
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937594 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") "
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937615 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") "
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937666 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") "
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937754 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") "
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937792 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") "
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.937860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") pod \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\" (UID: \"3af7b9f3-ab28-4971-9cde-112e8127e7ed\") "
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.938022 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.938593 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.939019 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.952779 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts" (OuterVolumeSpecName: "scripts") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.956747 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8" (OuterVolumeSpecName: "kube-api-access-p8vc8") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "kube-api-access-p8vc8". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:43:33 crc kubenswrapper[4804]: I0128 11:43:33.965092 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.018177 4804 generic.go:334] "Generic (PLEG): container finished" podID="359ecb47-f044-4273-8589-c0ceedb367b5" containerID="4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14" exitCode=0
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.018250 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerDied","Data":"4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14"}
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029801 4804 generic.go:334] "Generic (PLEG): container finished" podID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60" exitCode=0
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029850 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"}
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029901 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"3af7b9f3-ab28-4971-9cde-112e8127e7ed","Type":"ContainerDied","Data":"c8765a95fb8f276f5341ac43164dd55a291ff8252543d039befa66bf61350f2c"}
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.029925 4804 scope.go:117] "RemoveContainer" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.030094 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039548 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data" (OuterVolumeSpecName: "config-data") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039946 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039978 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.039992 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.040005 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8vc8\" (UniqueName: \"kubernetes.io/projected/3af7b9f3-ab28-4971-9cde-112e8127e7ed-kube-api-access-p8vc8\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.040016 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/3af7b9f3-ab28-4971-9cde-112e8127e7ed-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.044945 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3af7b9f3-ab28-4971-9cde-112e8127e7ed" (UID: "3af7b9f3-ab28-4971-9cde-112e8127e7ed"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.059106 4804 scope.go:117] "RemoveContainer" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.079285 4804 scope.go:117] "RemoveContainer" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.097072 4804 scope.go:117] "RemoveContainer" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.116516 4804 scope.go:117] "RemoveContainer" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"
Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.117035 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4\": container with ID starting with ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4 not found: ID does not exist" containerID="ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117134 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4"} err="failed to get container status \"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4\": rpc error: code = NotFound desc = could not find container \"ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4\": container with ID starting with ced2884044f00b29af54fb771f49eef72185742941edec8a049f81ea853cd4f4 not found: ID does not exist"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117221 4804 scope.go:117] "RemoveContainer" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"
Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.117615 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916\": container with ID starting with 9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916 not found: ID does not exist" containerID="9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117652 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916"} err="failed to get container status \"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916\": rpc error: code = NotFound desc = could not find container \"9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916\": container with ID starting with 9a37652a23e7c544851cbaf9e26ab389947e8a1ff5402e257beb4ebfc16e5916 not found: ID does not exist"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.117677 4804 scope.go:117] "RemoveContainer" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"
Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.118111 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c\": container with ID starting with 2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c not found: ID does not exist" containerID="2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.118200 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c"} err="failed to get container status \"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c\": rpc error: code = NotFound desc = could not find container \"2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c\": container with ID starting with 2e4a3ce57df1f6a576a240cdaf887e2e0e1c4cb4457df84788c811c644ceba6c not found: ID does not exist"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.118267 4804 scope.go:117] "RemoveContainer" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"
Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.118597 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60\": container with ID starting with f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60 not found: ID does not exist" containerID="f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.118648 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60"} err="failed to get container status \"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60\": rpc error: code = NotFound desc = could not find container \"f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60\": container with ID starting with f3a2c90d276640c16a0a71ec3bb8e784f1e76a52a920864ccb92e4161a595e60 not found: ID does not exist"
Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.141367 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName:
\"kubernetes.io/secret/3af7b9f3-ab28-4971-9cde-112e8127e7ed-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.364368 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.374344 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.387991 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388523 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388543 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388561 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388571 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388591 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388598 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" Jan 28 11:43:34 crc kubenswrapper[4804]: E0128 11:43:34.388620 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" Jan 28 11:43:34 crc 
kubenswrapper[4804]: I0128 11:43:34.388627 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388822 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="proxy-httpd" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388847 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="sg-core" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388857 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-central-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.388869 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" containerName="ceilometer-notification-agent" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.390855 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.393760 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.398724 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.426373 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550097 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550161 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550196 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550296 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " 
pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550364 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.550499 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651754 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651868 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651916 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmm2q\" 
(UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651947 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.651993 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.652043 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.652076 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.652633 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: 
I0128 11:43:34.652751 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.655876 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.656000 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.656675 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.659033 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.671954 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"ceilometer-0\" (UID: 
\"f3580297-d401-446c-818f-fbb89e50c757\") " pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.713730 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:43:34 crc kubenswrapper[4804]: I0128 11:43:34.928776 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3af7b9f3-ab28-4971-9cde-112e8127e7ed" path="/var/lib/kubelet/pods/3af7b9f3-ab28-4971-9cde-112e8127e7ed/volumes" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.137182 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:43:35 crc kubenswrapper[4804]: W0128 11:43:35.138024 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3580297_d401_446c_818f_fbb89e50c757.slice/crio-14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9 WatchSource:0}: Error finding container 14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9: Status 404 returned error can't find the container with id 14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9 Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.286271 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363048 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363108 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363150 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.363291 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") pod \"359ecb47-f044-4273-8589-c0ceedb367b5\" (UID: \"359ecb47-f044-4273-8589-c0ceedb367b5\") " Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.368717 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np" (OuterVolumeSpecName: "kube-api-access-nv5np") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "kube-api-access-nv5np". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.369101 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts" (OuterVolumeSpecName: "scripts") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.389143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.389834 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data" (OuterVolumeSpecName: "config-data") pod "359ecb47-f044-4273-8589-c0ceedb367b5" (UID: "359ecb47-f044-4273-8589-c0ceedb367b5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465912 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465940 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nv5np\" (UniqueName: \"kubernetes.io/projected/359ecb47-f044-4273-8589-c0ceedb367b5-kube-api-access-nv5np\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465953 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:35 crc kubenswrapper[4804]: I0128 11:43:35.465963 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/359ecb47-f044-4273-8589-c0ceedb367b5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.050768 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-qbth2" event={"ID":"359ecb47-f044-4273-8589-c0ceedb367b5","Type":"ContainerDied","Data":"daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76"} Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.051170 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="daf675501f00de9fd7512d405b357c4cc69caeb3e4856b26378f4e1945fe6e76" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.050792 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-qbth2" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.056164 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301"} Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.056203 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9"} Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.190186 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:43:36 crc kubenswrapper[4804]: E0128 11:43:36.190552 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" containerName="nova-cell0-conductor-db-sync" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.190565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" containerName="nova-cell0-conductor-db-sync" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.190727 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" containerName="nova-cell0-conductor-db-sync" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.191320 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.193295 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-kcpcp" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.193328 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.207224 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.279101 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.279205 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.279286 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.380546 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.380689 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.380834 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.389444 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.389658 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.408291 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"nova-cell0-conductor-0\" (UID: 
\"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.505289 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:36 crc kubenswrapper[4804]: I0128 11:43:36.828402 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:43:37 crc kubenswrapper[4804]: I0128 11:43:37.068631 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerStarted","Data":"c41ec5eb61e29312ebbde6dd9b201b0e68fdaaa8fb1724740ba107ac19157740"} Jan 28 11:43:37 crc kubenswrapper[4804]: I0128 11:43:37.071166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56"} Jan 28 11:43:38 crc kubenswrapper[4804]: I0128 11:43:38.086582 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6"} Jan 28 11:43:38 crc kubenswrapper[4804]: I0128 11:43:38.088825 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerStarted","Data":"fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441"} Jan 28 11:43:38 crc kubenswrapper[4804]: I0128 11:43:38.114690 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.114672563 podStartE2EDuration="2.114672563s" podCreationTimestamp="2026-01-28 11:43:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:38.105470634 +0000 UTC m=+1293.900350618" watchObservedRunningTime="2026-01-28 11:43:38.114672563 +0000 UTC m=+1293.909552547" Jan 28 11:43:39 crc kubenswrapper[4804]: I0128 11:43:39.096189 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:40 crc kubenswrapper[4804]: I0128 11:43:40.108993 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerStarted","Data":"2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f"} Jan 28 11:43:40 crc kubenswrapper[4804]: I0128 11:43:40.109321 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:43:40 crc kubenswrapper[4804]: I0128 11:43:40.128274 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.116048919 podStartE2EDuration="6.128261575s" podCreationTimestamp="2026-01-28 11:43:34 +0000 UTC" firstStartedPulling="2026-01-28 11:43:35.140686647 +0000 UTC m=+1290.935566631" lastFinishedPulling="2026-01-28 11:43:39.152899293 +0000 UTC m=+1294.947779287" observedRunningTime="2026-01-28 11:43:40.127243863 +0000 UTC m=+1295.922123847" watchObservedRunningTime="2026-01-28 11:43:40.128261575 +0000 UTC m=+1295.923141559" Jan 28 11:43:42 crc kubenswrapper[4804]: I0128 11:43:42.583409 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:43:42 crc kubenswrapper[4804]: I0128 11:43:42.583823 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:43:46 crc kubenswrapper[4804]: I0128 11:43:46.529692 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.008466 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.010244 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.013162 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.013190 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.021609 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095007 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095357 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " 
pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095433 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.095572 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197103 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197416 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197478 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " 
pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.197655 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.207122 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.211262 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.232722 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.236842 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"nova-cell0-cell-mapping-blnpq\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") " pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc 
kubenswrapper[4804]: I0128 11:43:47.252086 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.253611 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.256347 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.275330 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.281692 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.284904 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.298906 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.298971 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303011 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"nova-metadata-0\" (UID: 
\"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303142 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303231 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303256 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303329 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.303448 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 
11:43:47.304165 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.323157 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.330980 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.354717 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.356864 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.359305 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.383270 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406120 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406244 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406304 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406321 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406372 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406394 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406458 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406492 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " 
pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406520 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406553 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.406571 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.407102 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.412797 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.413182 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.424036 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.428638 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.430085 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.448950 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.450226 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.454209 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"nova-api-0\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") " pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.457269 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.462739 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"nova-metadata-0\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.475004 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.501972 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.508934 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509075 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509113 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509151 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509175 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.509217 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.512150 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.514208 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.520385 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.524289 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.528789 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.541668 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"nova-scheduler-0\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") " pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.546144 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.560005 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.610853 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612207 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612303 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612485 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612567 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " 
pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612737 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612787 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612832 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.612912 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.619017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.631703 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.633696 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"nova-cell1-novncproxy-0\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.714950 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715008 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715049 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " 
pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715105 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715430 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.715488 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.716290 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.716442 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: 
I0128 11:43:47.717480 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.718294 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.720692 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.737664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"dnsmasq-dns-865f5d856f-9f892\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") " pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.872774 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:47 crc kubenswrapper[4804]: I0128 11:43:47.889303 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.037098 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.138551 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.139261 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd67f7045_5136_4adb_af27_14ff32c4c2ea.slice/crio-7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52 WatchSource:0}: Error finding container 7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52: Status 404 returned error can't find the container with id 7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.203965 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerStarted","Data":"9caada4f3046311c64b68a2a14859289d74554adf28625be0bc6ad2f9d554fdc"} Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.207554 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerStarted","Data":"7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52"} Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.222340 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.240036 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc02dc60a_4990_4b17_8ebd_7b0b58ac8d90.slice/crio-89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0 
WatchSource:0}: Error finding container 89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0: Status 404 returned error can't find the container with id 89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.325063 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.326531 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.330816 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4afa58a1_e3ce_42e1_a0d7_bf0c57459ed2.slice/crio-523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383 WatchSource:0}: Error finding container 523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383: Status 404 returned error can't find the container with id 523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.331597 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.331750 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.348514 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.362354 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429210 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429434 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429478 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.429679 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.491301 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.531468 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " 
pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.531561 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.531648 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.531697 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.540193 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.540683 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " 
pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.545517 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.550665 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"nova-cell1-conductor-db-sync-t5xcd\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") " pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:48 crc kubenswrapper[4804]: W0128 11:43:48.598513 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod913fe193_1d5f_4561_9618_fde749a25a1d.slice/crio-2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4 WatchSource:0}: Error finding container 2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4: Status 404 returned error can't find the container with id 2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4 Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.599495 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"] Jan 28 11:43:48 crc kubenswrapper[4804]: I0128 11:43:48.698546 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.202425 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.219695 4804 generic.go:334] "Generic (PLEG): container finished" podID="913fe193-1d5f-4561-9618-fde749a25a1d" containerID="5b66ffd3825053b82c96be643a5c4b3e14230fd04a94235eb6e84c88e45a3ddc" exitCode=0 Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.219861 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerDied","Data":"5b66ffd3825053b82c96be643a5c4b3e14230fd04a94235eb6e84c88e45a3ddc"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.220015 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerStarted","Data":"2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.223433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerStarted","Data":"30cd19d729fe0a8f365f4576d67a9396141f36b3555091744e62104b74b1d641"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.229671 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerStarted","Data":"523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.232000 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" 
event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerStarted","Data":"a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.234265 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerStarted","Data":"89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0"} Jan 28 11:43:49 crc kubenswrapper[4804]: I0128 11:43:49.266919 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-blnpq" podStartSLOduration=3.266897485 podStartE2EDuration="3.266897485s" podCreationTimestamp="2026-01-28 11:43:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:49.255792068 +0000 UTC m=+1305.050672062" watchObservedRunningTime="2026-01-28 11:43:49.266897485 +0000 UTC m=+1305.061777469" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.246864 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerStarted","Data":"a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f"} Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.247666 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-865f5d856f-9f892" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.252023 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerStarted","Data":"14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685"} Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.252055 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerStarted","Data":"4b89b20bfbaf4f095edc6956fb3ba47586bb3c4fc669a11ca40dee8d933950d5"} Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.278210 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-865f5d856f-9f892" podStartSLOduration=3.278191113 podStartE2EDuration="3.278191113s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:50.27268131 +0000 UTC m=+1306.067561294" watchObservedRunningTime="2026-01-28 11:43:50.278191113 +0000 UTC m=+1306.073071097" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.295105 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" podStartSLOduration=2.295084872 podStartE2EDuration="2.295084872s" podCreationTimestamp="2026-01-28 11:43:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:50.290182338 +0000 UTC m=+1306.085062322" watchObservedRunningTime="2026-01-28 11:43:50.295084872 +0000 UTC m=+1306.089964856" Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.846540 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:50 crc kubenswrapper[4804]: I0128 11:43:50.895859 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.281113 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerStarted","Data":"a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959"} Jan 28 11:43:52 crc 
kubenswrapper[4804]: I0128 11:43:52.281278 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959" gracePeriod=30 Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.282841 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerStarted","Data":"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286022 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" containerID="cri-o://0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" gracePeriod=30 Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286232 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerStarted","Data":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286279 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerStarted","Data":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.286343 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-metadata" containerID="cri-o://ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" gracePeriod=30 Jan 28 11:43:52 crc 
kubenswrapper[4804]: I0128 11:43:52.292054 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerStarted","Data":"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.292112 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerStarted","Data":"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"} Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.319522 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.622947546 podStartE2EDuration="5.319495113s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="2026-01-28 11:43:48.493709395 +0000 UTC m=+1304.288589379" lastFinishedPulling="2026-01-28 11:43:51.190256962 +0000 UTC m=+1306.985136946" observedRunningTime="2026-01-28 11:43:52.305759223 +0000 UTC m=+1308.100639197" watchObservedRunningTime="2026-01-28 11:43:52.319495113 +0000 UTC m=+1308.114375207" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.333241 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.482159506 podStartE2EDuration="5.333217122s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="2026-01-28 11:43:48.333527449 +0000 UTC m=+1304.128407473" lastFinishedPulling="2026-01-28 11:43:51.184585105 +0000 UTC m=+1306.979465089" observedRunningTime="2026-01-28 11:43:52.325591134 +0000 UTC m=+1308.120471118" watchObservedRunningTime="2026-01-28 11:43:52.333217122 +0000 UTC m=+1308.128097096" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.354615 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" 
podStartSLOduration=2.425206692 podStartE2EDuration="5.354579231s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="2026-01-28 11:43:48.247640759 +0000 UTC m=+1304.042520743" lastFinishedPulling="2026-01-28 11:43:51.177013298 +0000 UTC m=+1306.971893282" observedRunningTime="2026-01-28 11:43:52.346787857 +0000 UTC m=+1308.141667851" watchObservedRunningTime="2026-01-28 11:43:52.354579231 +0000 UTC m=+1308.149459215" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.373572 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.377630253 podStartE2EDuration="5.373551246s" podCreationTimestamp="2026-01-28 11:43:47 +0000 UTC" firstStartedPulling="2026-01-28 11:43:48.189051644 +0000 UTC m=+1303.983931618" lastFinishedPulling="2026-01-28 11:43:51.184972627 +0000 UTC m=+1306.979852611" observedRunningTime="2026-01-28 11:43:52.368713434 +0000 UTC m=+1308.163593428" watchObservedRunningTime="2026-01-28 11:43:52.373551246 +0000 UTC m=+1308.168431230" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.546946 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.547242 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.561248 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.873250 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.929784 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.956093 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957206 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957267 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957447 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") pod \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\" (UID: \"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90\") " Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.957722 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs" (OuterVolumeSpecName: "logs") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.958333 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.962402 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d" (OuterVolumeSpecName: "kube-api-access-q7s4d") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "kube-api-access-q7s4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:43:52 crc kubenswrapper[4804]: I0128 11:43:52.985583 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data" (OuterVolumeSpecName: "config-data") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.002230 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" (UID: "c02dc60a-4990-4b17-8ebd-7b0b58ac8d90"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.060402 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.060432 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q7s4d\" (UniqueName: \"kubernetes.io/projected/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-kube-api-access-q7s4d\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.060463 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302292 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302313 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerDied","Data":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302379 4804 scope.go:117] "RemoveContainer" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302251 4804 generic.go:334] "Generic (PLEG): container finished" podID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" exitCode=0 Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302423 4804 generic.go:334] "Generic (PLEG): container finished" podID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" 
containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" exitCode=143 Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302530 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerDied","Data":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.302579 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"c02dc60a-4990-4b17-8ebd-7b0b58ac8d90","Type":"ContainerDied","Data":"89fefe8c0681022add25947cf943e711345231c628a2423e37a026f425da2ce0"} Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.334383 4804 scope.go:117] "RemoveContainer" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.349277 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.363834 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.381112 4804 scope.go:117] "RemoveContainer" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.381626 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": container with ID starting with ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e not found: ID does not exist" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.381669 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} err="failed to get container status \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": rpc error: code = NotFound desc = could not find container \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": container with ID starting with ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e not found: ID does not exist" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.381703 4804 scope.go:117] "RemoveContainer" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.382337 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.382761 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.382780 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.382800 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-metadata" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.382807 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-metadata" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.383035 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" containerName="nova-metadata-log" Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.383051 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" 
containerName="nova-metadata-metadata"
Jan 28 11:43:53 crc kubenswrapper[4804]: E0128 11:43:53.387525 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": container with ID starting with 0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f not found: ID does not exist" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.387567 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} err="failed to get container status \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": rpc error: code = NotFound desc = could not find container \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": container with ID starting with 0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f not found: ID does not exist"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.387594 4804 scope.go:117] "RemoveContainer" containerID="ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.389444 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e"} err="failed to get container status \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": rpc error: code = NotFound desc = could not find container \"ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e\": container with ID starting with ef8c084478cd48e38ed27a53752f416cdd8d6e7f2311ed3fdddfaa613bf86f0e not found: ID does not exist"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.389473 4804 scope.go:117] "RemoveContainer" containerID="0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.392611 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f"} err="failed to get container status \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": rpc error: code = NotFound desc = could not find container \"0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f\": container with ID starting with 0b8adab06df2be345d4a18e3bceb79319123684d56ff0614e5a707b0cb71b62f not found: ID does not exist"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.394281 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.395780 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.401838 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.402019 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.569555 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570331 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570389 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570469 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.570562 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672751 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672808 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672856 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.672915 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.673028 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.673475 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.677940 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.678505 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.685248 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.688799 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"nova-metadata-0\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " pod="openstack/nova-metadata-0"
Jan 28 11:43:53 crc kubenswrapper[4804]: I0128 11:43:53.723129 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 11:43:54 crc kubenswrapper[4804]: I0128 11:43:54.172271 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 11:43:54 crc kubenswrapper[4804]: W0128 11:43:54.176819 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb27bc011_ed63_4b36_ae46_bba181d0989b.slice/crio-7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b WatchSource:0}: Error finding container 7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b: Status 404 returned error can't find the container with id 7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b
Jan 28 11:43:54 crc kubenswrapper[4804]: I0128 11:43:54.322161 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerStarted","Data":"7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b"}
Jan 28 11:43:54 crc kubenswrapper[4804]: I0128 11:43:54.940375 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c02dc60a-4990-4b17-8ebd-7b0b58ac8d90" path="/var/lib/kubelet/pods/c02dc60a-4990-4b17-8ebd-7b0b58ac8d90/volumes"
Jan 28 11:43:55 crc kubenswrapper[4804]: I0128 11:43:55.334997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerStarted","Data":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"}
Jan 28 11:43:55 crc kubenswrapper[4804]: I0128 11:43:55.335345 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerStarted","Data":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"}
Jan 28 11:43:55 crc kubenswrapper[4804]: I0128 11:43:55.368840 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.368822098 podStartE2EDuration="2.368822098s" podCreationTimestamp="2026-01-28 11:43:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:43:55.3679505 +0000 UTC m=+1311.162830484" watchObservedRunningTime="2026-01-28 11:43:55.368822098 +0000 UTC m=+1311.163702102"
Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.343923 4804 generic.go:334] "Generic (PLEG): container finished" podID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" containerID="14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685" exitCode=0
Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.343997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerDied","Data":"14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685"}
Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.345361 4804 generic.go:334] "Generic (PLEG): container finished" podID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerID="a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268" exitCode=0
Jan 28 11:43:56 crc kubenswrapper[4804]: I0128 11:43:56.345433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerDied","Data":"a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268"}
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.529274 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.529676 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.563244 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.595489 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.800117 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq"
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.805179 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd"
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.886966 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887102 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887153 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887245 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887362 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887395 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") pod \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\" (UID: \"f76909b5-2ed7-476f-8f90-d8c9d168af6d\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.887422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") pod \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\" (UID: \"f35650b1-56b4-49fb-9ecc-9aa90a1386db\") "
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.891049 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-865f5d856f-9f892"
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.894330 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4" (OuterVolumeSpecName: "kube-api-access-q4qw4") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "kube-api-access-q4qw4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.895100 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd" (OuterVolumeSpecName: "kube-api-access-7t5jd") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "kube-api-access-7t5jd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.895986 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts" (OuterVolumeSpecName: "scripts") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.897028 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts" (OuterVolumeSpecName: "scripts") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.943085 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data" (OuterVolumeSpecName: "config-data") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:57 crc kubenswrapper[4804]: I0128 11:43:57.986157 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996012 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4qw4\" (UniqueName: \"kubernetes.io/projected/f76909b5-2ed7-476f-8f90-d8c9d168af6d-kube-api-access-q4qw4\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996062 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7t5jd\" (UniqueName: \"kubernetes.io/projected/f35650b1-56b4-49fb-9ecc-9aa90a1386db-kube-api-access-7t5jd\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996076 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996097 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996109 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.996122 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.997183 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"]
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:57.997585 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" podUID="2b276638-3e05-4295-825f-321552970394" containerName="dnsmasq-dns" containerID="cri-o://109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e" gracePeriod=10
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.068696 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f76909b5-2ed7-476f-8f90-d8c9d168af6d" (UID: "f76909b5-2ed7-476f-8f90-d8c9d168af6d"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.075262 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data" (OuterVolumeSpecName: "config-data") pod "f35650b1-56b4-49fb-9ecc-9aa90a1386db" (UID: "f35650b1-56b4-49fb-9ecc-9aa90a1386db"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.100512 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f35650b1-56b4-49fb-9ecc-9aa90a1386db-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.100553 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f76909b5-2ed7-476f-8f90-d8c9d168af6d-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.373092 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-t5xcd" event={"ID":"f35650b1-56b4-49fb-9ecc-9aa90a1386db","Type":"ContainerDied","Data":"4b89b20bfbaf4f095edc6956fb3ba47586bb3c4fc669a11ca40dee8d933950d5"}
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.373368 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b89b20bfbaf4f095edc6956fb3ba47586bb3c4fc669a11ca40dee8d933950d5"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.373441 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-t5xcd"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.380205 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-blnpq"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.380205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-blnpq" event={"ID":"f76909b5-2ed7-476f-8f90-d8c9d168af6d","Type":"ContainerDied","Data":"9caada4f3046311c64b68a2a14859289d74554adf28625be0bc6ad2f9d554fdc"}
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.380251 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9caada4f3046311c64b68a2a14859289d74554adf28625be0bc6ad2f9d554fdc"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.386194 4804 generic.go:334] "Generic (PLEG): container finished" podID="2b276638-3e05-4295-825f-321552970394" containerID="109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e" exitCode=0
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.386701 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerDied","Data":"109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e"}
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.476382 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 11:43:58 crc kubenswrapper[4804]: E0128 11:43:58.481620 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerName="nova-manage"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.481662 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerName="nova-manage"
Jan 28 11:43:58 crc kubenswrapper[4804]: E0128 11:43:58.481742 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" containerName="nova-cell1-conductor-db-sync"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.481751 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" containerName="nova-cell1-conductor-db-sync"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.482075 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" containerName="nova-manage"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.482108 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" containerName="nova-cell1-conductor-db-sync"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.482976 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.488998 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.490794 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.534523 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.536909 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.536953 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.537034 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.565071 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.617141 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.617139 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.188:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638109 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") "
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638338 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") "
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638454 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") "
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638491 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") "
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638643 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") "
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.638687 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") pod \"2b276638-3e05-4295-825f-321552970394\" (UID: \"2b276638-3e05-4295-825f-321552970394\") "
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.639029 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.639220 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.639252 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.645106 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm" (OuterVolumeSpecName: "kube-api-access-8z7dm") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "kube-api-access-8z7dm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.646754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.656793 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.659856 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"nova-cell1-conductor-0\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.721243 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.725111 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.726319 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.732499 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.732751 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log" containerID="cri-o://e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587" gracePeriod=30
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.733247 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api" containerID="cri-o://40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0" gracePeriod=30
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.741236 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.741267 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z7dm\" (UniqueName: \"kubernetes.io/projected/2b276638-3e05-4295-825f-321552970394-kube-api-access-8z7dm\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.745650 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.746168 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config" (OuterVolumeSpecName: "config") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.749520 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.762362 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "2b276638-3e05-4295-825f-321552970394" (UID: "2b276638-3e05-4295-825f-321552970394"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.764452 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"]
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.816643 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0"
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842928 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842958 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842969 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:58 crc kubenswrapper[4804]: I0128 11:43:58.842979 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2b276638-3e05-4295-825f-321552970394-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.042132 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:43:59 crc kubenswrapper[4804]: W0128 11:43:59.298639 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8e88e9db_b96d_4009_a4e6_ccbb5be53f85.slice/crio-7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea WatchSource:0}: Error finding container 7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea: Status 404 returned error can't find the container with id 7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea
Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.299128 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"]
Jan 28 11:43:59
crc kubenswrapper[4804]: I0128 11:43:59.399651 4804 generic.go:334] "Generic (PLEG): container finished" podID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587" exitCode=143 Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.399727 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerDied","Data":"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"} Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.415608 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" event={"ID":"2b276638-3e05-4295-825f-321552970394","Type":"ContainerDied","Data":"5ac546ee98d5d28f78181c3225f300b9da32c9a6f7eeb78daa5bbc95aceb3b8d"} Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.415666 4804 scope.go:117] "RemoveContainer" containerID="109925f98b98bd19bc310a0910c394cca6331ef46f59894bad1048ee96f57b9e" Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.415697 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb4fc677f-kzz4k" Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.417440 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerStarted","Data":"7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea"} Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.447006 4804 scope.go:117] "RemoveContainer" containerID="7d55e8f0ae30cf6b17f9255210f13d604f097d0227761c71497f25b925dfda5d" Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.448280 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:43:59 crc kubenswrapper[4804]: I0128 11:43:59.457278 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb4fc677f-kzz4k"] Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.431166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerStarted","Data":"87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955"} Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.431330 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" containerID="cri-o://578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" gracePeriod=30 Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.431367 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" containerID="cri-o://0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" gracePeriod=30 Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.432025 4804 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler" containerID="cri-o://caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" gracePeriod=30 Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.470816 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.470758777 podStartE2EDuration="2.470758777s" podCreationTimestamp="2026-01-28 11:43:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:00.46256654 +0000 UTC m=+1316.257446544" watchObservedRunningTime="2026-01-28 11:44:00.470758777 +0000 UTC m=+1316.265638761" Jan 28 11:44:00 crc kubenswrapper[4804]: I0128 11:44:00.927628 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2b276638-3e05-4295-825f-321552970394" path="/var/lib/kubelet/pods/2b276638-3e05-4295-825f-321552970394/volumes" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.020480 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.095550 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.095825 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.095872 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.096012 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.096072 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") pod \"b27bc011-ed63-4b36-ae46-bba181d0989b\" (UID: \"b27bc011-ed63-4b36-ae46-bba181d0989b\") " Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.098049 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs" (OuterVolumeSpecName: "logs") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.112893 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk" (OuterVolumeSpecName: "kube-api-access-bcwjk") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "kube-api-access-bcwjk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.125451 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data" (OuterVolumeSpecName: "config-data") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.132085 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.149930 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b27bc011-ed63-4b36-ae46-bba181d0989b" (UID: "b27bc011-ed63-4b36-ae46-bba181d0989b"). 
InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198663 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bcwjk\" (UniqueName: \"kubernetes.io/projected/b27bc011-ed63-4b36-ae46-bba181d0989b-kube-api-access-bcwjk\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198693 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198702 4804 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198712 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b27bc011-ed63-4b36-ae46-bba181d0989b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.198720 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b27bc011-ed63-4b36-ae46-bba181d0989b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441017 4804 generic.go:334] "Generic (PLEG): container finished" podID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" exitCode=0 Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441059 4804 generic.go:334] "Generic (PLEG): container finished" podID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" exitCode=143 Jan 28 11:44:01 crc kubenswrapper[4804]: 
I0128 11:44:01.441082 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441122 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerDied","Data":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"} Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441182 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerDied","Data":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"} Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b27bc011-ed63-4b36-ae46-bba181d0989b","Type":"ContainerDied","Data":"7ad87104ee484966dd41a5010615ced3c765db64d29ab101c21b63bf343c102b"} Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.441218 4804 scope.go:117] "RemoveContainer" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.442781 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.467821 4804 scope.go:117] "RemoveContainer" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.472950 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.483900 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498193 4804 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498732 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b276638-3e05-4295-825f-321552970394" containerName="init" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498751 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b276638-3e05-4295-825f-321552970394" containerName="init" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498767 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b276638-3e05-4295-825f-321552970394" containerName="dnsmasq-dns" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498774 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b276638-3e05-4295-825f-321552970394" containerName="dnsmasq-dns" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498791 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498799 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.498812 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.498819 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.499045 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-log" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.499063 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b276638-3e05-4295-825f-321552970394" 
containerName="dnsmasq-dns" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.499081 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" containerName="nova-metadata-metadata" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.500265 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.504625 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.504836 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.506909 4804 scope.go:117] "RemoveContainer" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.513085 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.531128 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": container with ID starting with 0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620 not found: ID does not exist" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.531207 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"} err="failed to get container status \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": rpc error: code = NotFound desc = could not find container 
\"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": container with ID starting with 0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620 not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.531258 4804 scope.go:117] "RemoveContainer" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: E0128 11:44:01.531996 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": container with ID starting with 578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca not found: ID does not exist" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532037 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"} err="failed to get container status \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": rpc error: code = NotFound desc = could not find container \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": container with ID starting with 578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532064 4804 scope.go:117] "RemoveContainer" containerID="0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532345 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620"} err="failed to get container status \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": rpc error: code = NotFound desc = could not find 
container \"0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620\": container with ID starting with 0f9af2daca568ec94e88ffbc4c36f5083b8c74560e6e1477c83d71776c9ab620 not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532396 4804 scope.go:117] "RemoveContainer" containerID="578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.532697 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca"} err="failed to get container status \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": rpc error: code = NotFound desc = could not find container \"578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca\": container with ID starting with 578c8b90934c217081a51bffc0ed8cecaad5f81e602aba35344e6c191b2175ca not found: ID does not exist" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607675 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607778 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607806 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607840 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.607907 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710020 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710070 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710111 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " 
pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710181 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710239 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.710774 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.715003 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.715388 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.718369 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.726417 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"nova-metadata-0\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " pod="openstack/nova-metadata-0" Jan 28 11:44:01 crc kubenswrapper[4804]: I0128 11:44:01.849770 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:02 crc kubenswrapper[4804]: I0128 11:44:02.289470 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:02 crc kubenswrapper[4804]: W0128 11:44:02.294602 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod60dd1bc0_1015_4f2e_8fe0_4e33e2fe36d3.slice/crio-59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312 WatchSource:0}: Error finding container 59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312: Status 404 returned error can't find the container with id 59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312 Jan 28 11:44:02 crc kubenswrapper[4804]: I0128 11:44:02.451429 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerStarted","Data":"59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312"} Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.567297 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.569357 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.570869 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"]
Jan 28 11:44:02 crc kubenswrapper[4804]: E0128 11:44:02.570968 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler"
Jan 28 11:44:02 crc kubenswrapper[4804]: I0128 11:44:02.924991 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b27bc011-ed63-4b36-ae46-bba181d0989b" path="/var/lib/kubelet/pods/b27bc011-ed63-4b36-ae46-bba181d0989b/volumes"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.308405 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.452263 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") pod \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") "
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.452425 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") pod \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") "
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.452490 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") pod \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\" (UID: \"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2\") "
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.456667 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4" (OuterVolumeSpecName: "kube-api-access-h52j4") pod "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" (UID: "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2"). InnerVolumeSpecName "kube-api-access-h52j4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468019 4804 generic.go:334] "Generic (PLEG): container finished" podID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d" exitCode=0
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468084 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerDied","Data":"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"}
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468111 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2","Type":"ContainerDied","Data":"523556f0a73e01c714755560786dfc8607783ee1ae02626c93c086b531e71383"}
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468128 4804 scope.go:117] "RemoveContainer" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.468223 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.473333 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerStarted","Data":"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a"}
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.473365 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerStarted","Data":"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f"}
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.485060 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" (UID: "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.517052 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data" (OuterVolumeSpecName: "config-data") pod "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" (UID: "4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.556263 4804 scope.go:117] "RemoveContainer" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"
Jan 28 11:44:03 crc kubenswrapper[4804]: E0128 11:44:03.556813 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d\": container with ID starting with caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d not found: ID does not exist" containerID="caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.556872 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d"} err="failed to get container status \"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d\": rpc error: code = NotFound desc = could not find container \"caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d\": container with ID starting with caa559184c780ed199e23dbd2b838d87648f0b776b77413d709b9ae3cc57432d not found: ID does not exist"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.557268 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.557290 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.557317 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h52j4\" (UniqueName: \"kubernetes.io/projected/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2-kube-api-access-h52j4\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.792460 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.79244191 podStartE2EDuration="2.79244191s" podCreationTimestamp="2026-01-28 11:44:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:03.510239504 +0000 UTC m=+1319.305119488" watchObservedRunningTime="2026-01-28 11:44:03.79244191 +0000 UTC m=+1319.587321894"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.797930 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.809127 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.821987 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:44:03 crc kubenswrapper[4804]: E0128 11:44:03.822495 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.822519 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.822784 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" containerName="nova-scheduler-scheduler"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.823634 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.825772 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.835274 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.964416 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.964495 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:03 crc kubenswrapper[4804]: I0128 11:44:03.964517 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.067026 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.067501 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.067809 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.073187 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.075017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.087213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"nova-scheduler-0\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.143035 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.451480 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488379 4804 generic.go:334] "Generic (PLEG): container finished" podID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0" exitCode=0
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488455 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488470 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerDied","Data":"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"}
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488506 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"d67f7045-5136-4adb-af27-14ff32c4c2ea","Type":"ContainerDied","Data":"7709d8911001699b6303fb7289a9df495a6178d282befad8adee9c09bde9fc52"}
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.488528 4804 scope.go:117] "RemoveContainer" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.513551 4804 scope.go:117] "RemoveContainer" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.532576 4804 scope.go:117] "RemoveContainer" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"
Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.533159 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0\": container with ID starting with 40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0 not found: ID does not exist" containerID="40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.533200 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0"} err="failed to get container status \"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0\": rpc error: code = NotFound desc = could not find container \"40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0\": container with ID starting with 40bf2d0420d0bb9f6ffaedc1da85766d0fb35ca1263d3ed4bb1ec0eda1368ad0 not found: ID does not exist"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.533226 4804 scope.go:117] "RemoveContainer" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"
Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.533617 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587\": container with ID starting with e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587 not found: ID does not exist" containerID="e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.533659 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587"} err="failed to get container status \"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587\": rpc error: code = NotFound desc = could not find container \"e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587\": container with ID starting with e6e568d2d5f4965383f37229eb7653714ef91456b13a8bb2b5a214634cb93587 not found: ID does not exist"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577293 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") "
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577390 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") "
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577557 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") "
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.577593 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") pod \"d67f7045-5136-4adb-af27-14ff32c4c2ea\" (UID: \"d67f7045-5136-4adb-af27-14ff32c4c2ea\") "
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.578178 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs" (OuterVolumeSpecName: "logs") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.581622 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv" (OuterVolumeSpecName: "kube-api-access-56krv") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "kube-api-access-56krv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.603859 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data" (OuterVolumeSpecName: "config-data") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.603964 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d67f7045-5136-4adb-af27-14ff32c4c2ea" (UID: "d67f7045-5136-4adb-af27-14ff32c4c2ea"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.651711 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:44:04 crc kubenswrapper[4804]: W0128 11:44:04.655268 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod99ee8dc6_b4c2_46ef_a2a5_3ba27ff2f711.slice/crio-f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf WatchSource:0}: Error finding container f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf: Status 404 returned error can't find the container with id f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680628 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d67f7045-5136-4adb-af27-14ff32c4c2ea-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680656 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-56krv\" (UniqueName: \"kubernetes.io/projected/d67f7045-5136-4adb-af27-14ff32c4c2ea-kube-api-access-56krv\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680669 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.680679 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d67f7045-5136-4adb-af27-14ff32c4c2ea-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.718354 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.826130 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.841785 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.850073 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.850487 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.850511 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api"
Jan 28 11:44:04 crc kubenswrapper[4804]: E0128 11:44:04.851197 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.851281 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.854951 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-api"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.854984 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" containerName="nova-api-log"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.858778 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.862845 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.863313 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.927379 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2" path="/var/lib/kubelet/pods/4afa58a1-e3ce-42e1-a0d7-bf0c57459ed2/volumes"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.928092 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d67f7045-5136-4adb-af27-14ff32c4c2ea" path="/var/lib/kubelet/pods/d67f7045-5136-4adb-af27-14ff32c4c2ea/volumes"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.993901 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.994106 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.994206 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:04 crc kubenswrapper[4804]: I0128 11:44:04.994257 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.095548 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.095954 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.096294 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.096419 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.096348 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.098159 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.101443 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.109782 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.113364 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"nova-api-0\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.205120 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.502346 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerStarted","Data":"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1"}
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.502685 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerStarted","Data":"f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf"}
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.523365 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.523348081 podStartE2EDuration="2.523348081s" podCreationTimestamp="2026-01-28 11:44:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:05.517788687 +0000 UTC m=+1321.312668671" watchObservedRunningTime="2026-01-28 11:44:05.523348081 +0000 UTC m=+1321.318228065"
Jan 28 11:44:05 crc kubenswrapper[4804]: I0128 11:44:05.658649 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:05 crc kubenswrapper[4804]: W0128 11:44:05.659298 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfc89087f_fead_4af8_b13c_67af8c77e7f7.slice/crio-8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e WatchSource:0}: Error finding container 8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e: Status 404 returned error can't find the container with id 8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e
Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.520105 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerStarted","Data":"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b"}
Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.520338 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerStarted","Data":"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807"}
Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.520348 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerStarted","Data":"8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e"}
Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.539738 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.539725508 podStartE2EDuration="2.539725508s" podCreationTimestamp="2026-01-28 11:44:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:06.538622613 +0000 UTC m=+1322.333502587" watchObservedRunningTime="2026-01-28 11:44:06.539725508 +0000 UTC m=+1322.334605492"
Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.850333 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 11:44:06 crc kubenswrapper[4804]: I0128 11:44:06.850463 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0"
Jan 28 11:44:08 crc kubenswrapper[4804]: I0128 11:44:08.669320 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:08 crc kubenswrapper[4804]: I0128 11:44:08.669863 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics" containerID="cri-o://bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a" gracePeriod=30
Jan 28 11:44:08 crc kubenswrapper[4804]: I0128 11:44:08.853912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.143794 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.157034 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.185166 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") pod \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\" (UID: \"97a6e239-25e0-4962-8c9d-4751ca2f4b1d\") "
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.191586 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v" (OuterVolumeSpecName: "kube-api-access-r249v") pod "97a6e239-25e0-4962-8c9d-4751ca2f4b1d" (UID: "97a6e239-25e0-4962-8c9d-4751ca2f4b1d"). InnerVolumeSpecName "kube-api-access-r249v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.288523 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r249v\" (UniqueName: \"kubernetes.io/projected/97a6e239-25e0-4962-8c9d-4751ca2f4b1d-kube-api-access-r249v\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552828 4804 generic.go:334] "Generic (PLEG): container finished" podID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a" exitCode=2
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552917 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerDied","Data":"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"}
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552968 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"97a6e239-25e0-4962-8c9d-4751ca2f4b1d","Type":"ContainerDied","Data":"e28d6e15bb8b7864184a210b8a21979cfee4c6a5d5b942d21fe32b6ed7b6e02c"}
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.552989 4804 scope.go:117] "RemoveContainer" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.553156 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.598611 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.600350 4804 scope.go:117] "RemoveContainer" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"
Jan 28 11:44:09 crc kubenswrapper[4804]: E0128 11:44:09.607347 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a\": container with ID starting with bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a not found: ID does not exist" containerID="bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.607409 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a"} err="failed to get container status \"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a\": rpc error: code = NotFound desc = could not find container \"bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a\": container with ID starting with bd87c6bc49e5ed43739f7cee047260a8e24aa2067cb18dd7a49810a34dcf8f3a not found: ID does not exist"
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.608676 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.619985 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:44:09 crc kubenswrapper[4804]: E0128 11:44:09.620864 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics"
Jan 28 11:44:09 crc
kubenswrapper[4804]: I0128 11:44:09.620915 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.621172 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" containerName="kube-state-metrics" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.621938 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.624144 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.624300 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.627440 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697121 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697315 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697355 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.697522 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.799801 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.800183 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.800283 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.800304 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.804874 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.805293 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.813664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.816427 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"kube-state-metrics-0\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " pod="openstack/kube-state-metrics-0" Jan 28 11:44:09 crc kubenswrapper[4804]: I0128 11:44:09.953842 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.367839 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.368724 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" containerID="cri-o://10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301" gracePeriod=30 Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.368799 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" containerID="cri-o://1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6" gracePeriod=30 Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.368806 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" containerID="cri-o://2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f" gracePeriod=30 Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.369414 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" containerID="cri-o://1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56" gracePeriod=30 Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.420183 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.421711 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.564205 4804 
generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6" exitCode=2 Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.564273 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6"} Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.565800 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerStarted","Data":"21e20525ca7a6c58cab2832c14cfe80c2d4514f39f84f4eb3108c5f05572b1bf"} Jan 28 11:44:10 crc kubenswrapper[4804]: I0128 11:44:10.928724 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97a6e239-25e0-4962-8c9d-4751ca2f4b1d" path="/var/lib/kubelet/pods/97a6e239-25e0-4962-8c9d-4751ca2f4b1d/volumes" Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577812 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f" exitCode=0 Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577843 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301" exitCode=0 Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577854 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f"} Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.577920 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301"} Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.851068 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 11:44:11 crc kubenswrapper[4804]: I0128 11:44:11.851119 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.582030 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.582413 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.591701 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3580297-d401-446c-818f-fbb89e50c757" containerID="1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56" exitCode=0 Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.591758 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56"} Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.593121 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" 
event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerStarted","Data":"ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710"} Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.593261 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.619492 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=2.293222735 podStartE2EDuration="3.619474955s" podCreationTimestamp="2026-01-28 11:44:09 +0000 UTC" firstStartedPulling="2026-01-28 11:44:10.421488098 +0000 UTC m=+1326.216368082" lastFinishedPulling="2026-01-28 11:44:11.747740318 +0000 UTC m=+1327.542620302" observedRunningTime="2026-01-28 11:44:12.607631164 +0000 UTC m=+1328.402511148" watchObservedRunningTime="2026-01-28 11:44:12.619474955 +0000 UTC m=+1328.414354929" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.866212 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.866213 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.886192 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.900827 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.900930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901159 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901258 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901410 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "run-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901534 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901613 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.901656 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") pod \"f3580297-d401-446c-818f-fbb89e50c757\" (UID: \"f3580297-d401-446c-818f-fbb89e50c757\") " Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.902432 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.903456 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.907808 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts" (OuterVolumeSpecName: "scripts") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.908745 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q" (OuterVolumeSpecName: "kube-api-access-rmm2q") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "kube-api-access-rmm2q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:12 crc kubenswrapper[4804]: I0128 11:44:12.990721 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005264 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005444 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005558 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f3580297-d401-446c-818f-fbb89e50c757-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.005692 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmm2q\" (UniqueName: \"kubernetes.io/projected/f3580297-d401-446c-818f-fbb89e50c757-kube-api-access-rmm2q\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.040093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.061962 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data" (OuterVolumeSpecName: "config-data") pod "f3580297-d401-446c-818f-fbb89e50c757" (UID: "f3580297-d401-446c-818f-fbb89e50c757"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.107602 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.107643 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3580297-d401-446c-818f-fbb89e50c757-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.607032 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f3580297-d401-446c-818f-fbb89e50c757","Type":"ContainerDied","Data":"14a4c38d4f2d56c74c58972e9ed2fa41c69a7c52d8e83ec184580c877203f8a9"} Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.607112 4804 scope.go:117] "RemoveContainer" containerID="2509bf4b13f32a9d4b208c9c897e5e615b25378e75e4b250d7877c40fc99630f" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.607191 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.645152 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.645157 4804 scope.go:117] "RemoveContainer" containerID="1bfe46f5b1d876490985e1a3f6d0da59f78789571798235a855e1d56bee636d6" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.664269 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.673935 4804 scope.go:117] "RemoveContainer" containerID="1d8aa855c628bc141777e622e822953c4716ead40c6674117561e04a96dece56" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.687786 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688228 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688240 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688256 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688262 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688275 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688282 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: E0128 11:44:13.688304 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688310 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688489 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="sg-core" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688506 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-notification-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688516 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="proxy-httpd" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.688525 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3580297-d401-446c-818f-fbb89e50c757" containerName="ceilometer-central-agent" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.690916 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.696784 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.697032 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.697222 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.697261 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.716953 4804 scope.go:117] "RemoveContainer" containerID="10a183c3b80dc4ee328a053d5a9ee94912af618329f720155eaad991e5227301" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724058 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724203 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724923 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 
28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.724954 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.725057 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.725199 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.725237 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.725829 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.828992 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829059 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829098 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829157 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829177 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829251 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " 
pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829281 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.829322 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.830357 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.830663 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.833685 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.833806 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.835199 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.835415 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.840322 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:13 crc kubenswrapper[4804]: I0128 11:44:13.858058 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"ceilometer-0\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " pod="openstack/ceilometer-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.020970 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.143940 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.170214 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.463437 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:14 crc kubenswrapper[4804]: W0128 11:44:14.464649 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88f7d3c2_ab36_467f_8ad5_0e899f804eca.slice/crio-5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78 WatchSource:0}: Error finding container 5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78: Status 404 returned error can't find the container with id 5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78 Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.619161 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78"} Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.646129 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Jan 28 11:44:14 crc kubenswrapper[4804]: I0128 11:44:14.925812 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3580297-d401-446c-818f-fbb89e50c757" path="/var/lib/kubelet/pods/f3580297-d401-446c-818f-fbb89e50c757/volumes" Jan 28 11:44:15 crc kubenswrapper[4804]: I0128 11:44:15.206593 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:44:15 crc 
kubenswrapper[4804]: I0128 11:44:15.207805 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Jan 28 11:44:16 crc kubenswrapper[4804]: I0128 11:44:16.288094 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:16 crc kubenswrapper[4804]: I0128 11:44:16.288098 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.198:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:16 crc kubenswrapper[4804]: I0128 11:44:16.636256 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a"} Jan 28 11:44:18 crc kubenswrapper[4804]: I0128 11:44:18.673906 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8"} Jan 28 11:44:19 crc kubenswrapper[4804]: I0128 11:44:19.686203 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9"} Jan 28 11:44:19 crc kubenswrapper[4804]: I0128 11:44:19.968181 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 
11:44:21.856248 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 11:44:21.856919 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 11:44:21.860544 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 11:44:21 crc kubenswrapper[4804]: I0128 11:44:21.862866 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.713991 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerStarted","Data":"71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221"} Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.716218 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.716237 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerDied","Data":"a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959"} Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.715998 4804 generic.go:334] "Generic (PLEG): container finished" podID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerID="a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959" exitCode=137 Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.738121 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.509004686 podStartE2EDuration="9.738101813s" podCreationTimestamp="2026-01-28 11:44:13 +0000 UTC" firstStartedPulling="2026-01-28 11:44:14.466747589 +0000 UTC 
m=+1330.261627573" lastFinishedPulling="2026-01-28 11:44:21.695844716 +0000 UTC m=+1337.490724700" observedRunningTime="2026-01-28 11:44:22.73129537 +0000 UTC m=+1338.526175364" watchObservedRunningTime="2026-01-28 11:44:22.738101813 +0000 UTC m=+1338.532981787" Jan 28 11:44:22 crc kubenswrapper[4804]: I0128 11:44:22.848729 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.017990 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") pod \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.018083 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") pod \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.018250 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") pod \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\" (UID: \"84b18213-5ffe-40a4-b2f7-a8bb117d9a79\") " Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.022914 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn" (OuterVolumeSpecName: "kube-api-access-z2dpn") pod "84b18213-5ffe-40a4-b2f7-a8bb117d9a79" (UID: "84b18213-5ffe-40a4-b2f7-a8bb117d9a79"). InnerVolumeSpecName "kube-api-access-z2dpn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.044356 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "84b18213-5ffe-40a4-b2f7-a8bb117d9a79" (UID: "84b18213-5ffe-40a4-b2f7-a8bb117d9a79"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.054825 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data" (OuterVolumeSpecName: "config-data") pod "84b18213-5ffe-40a4-b2f7-a8bb117d9a79" (UID: "84b18213-5ffe-40a4-b2f7-a8bb117d9a79"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.120522 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.120789 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z2dpn\" (UniqueName: \"kubernetes.io/projected/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-kube-api-access-z2dpn\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.120867 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/84b18213-5ffe-40a4-b2f7-a8bb117d9a79-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.735475 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.735706 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"84b18213-5ffe-40a4-b2f7-a8bb117d9a79","Type":"ContainerDied","Data":"30cd19d729fe0a8f365f4576d67a9396141f36b3555091744e62104b74b1d641"} Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.736498 4804 scope.go:117] "RemoveContainer" containerID="a3ec7c22b5141bccbf8d04497f2060dbc86f6d11337e8f669677bbb61ab18959" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.788395 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.805169 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.816104 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: E0128 11:44:23.816567 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.816585 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.816798 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.818355 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.825189 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.825679 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.825868 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.842699 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.941962 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.942048 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.942076 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 
crc kubenswrapper[4804]: I0128 11:44:23.942477 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:23 crc kubenswrapper[4804]: I0128 11:44:23.942526 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044137 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044203 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044252 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: 
I0128 11:44:24.044330 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.044360 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.049635 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.050063 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.050413 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.050857 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.086153 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"nova-cell1-novncproxy-0\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.145450 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.584117 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.744185 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerStarted","Data":"3d6b0e8a60f6d64a7898369a58401894b066ffaf5a9e53838f90370bc8ff4841"} Jan 28 11:44:24 crc kubenswrapper[4804]: I0128 11:44:24.927725 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84b18213-5ffe-40a4-b2f7-a8bb117d9a79" path="/var/lib/kubelet/pods/84b18213-5ffe-40a4-b2f7-a8bb117d9a79/volumes" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.209919 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.210511 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.211617 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.214363 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.757773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerStarted","Data":"67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a"} Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.758539 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.765621 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Jan 28 11:44:25 crc kubenswrapper[4804]: I0128 11:44:25.779584 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.779565671 podStartE2EDuration="2.779565671s" podCreationTimestamp="2026-01-28 11:44:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:25.776692491 +0000 UTC m=+1341.571572475" watchObservedRunningTime="2026-01-28 11:44:25.779565671 +0000 UTC m=+1341.574445655" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.008319 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.012602 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.026719 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192041 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192087 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192111 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192280 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192454 4804 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.192524 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294543 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294590 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294613 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294651 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294696 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.294725 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.295436 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.295471 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.296011 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: 
\"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.296213 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.296776 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.315841 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"dnsmasq-dns-5c7b6c5df9-j9ld2\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.334326 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:26 crc kubenswrapper[4804]: I0128 11:44:26.826524 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:44:26 crc kubenswrapper[4804]: W0128 11:44:26.836112 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7cab05f_efa6_4a74_920b_96f8f30f1736.slice/crio-02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e WatchSource:0}: Error finding container 02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e: Status 404 returned error can't find the container with id 02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e Jan 28 11:44:27 crc kubenswrapper[4804]: I0128 11:44:27.778212 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerID="2cf37cb975241a8023292503844e50e2fd76dae6622e27d3a7bdc8476283ee2c" exitCode=0 Jan 28 11:44:27 crc kubenswrapper[4804]: I0128 11:44:27.779826 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerDied","Data":"2cf37cb975241a8023292503844e50e2fd76dae6622e27d3a7bdc8476283ee2c"} Jan 28 11:44:27 crc kubenswrapper[4804]: I0128 11:44:27.779859 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerStarted","Data":"02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.073389 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.073644 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" 
podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" containerID="cri-o://6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.073875 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" containerID="cri-o://71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.074109 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" containerID="cri-o://403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.074161 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" containerID="cri-o://b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.488093 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.796645 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerStarted","Data":"91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.797095 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800093 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221" exitCode=0 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800124 4804 generic.go:334] "Generic (PLEG): container finished" podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9" exitCode=2 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800133 4804 generic.go:334] "Generic (PLEG): container finished" podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8" exitCode=0 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800143 4804 generic.go:334] "Generic (PLEG): container finished" podID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerID="6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a" exitCode=0 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800254 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800289 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800302 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a"} Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800340 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log" containerID="cri-o://b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.800545 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api" containerID="cri-o://3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" gracePeriod=30 Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.832941 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" podStartSLOduration=3.832925562 podStartE2EDuration="3.832925562s" podCreationTimestamp="2026-01-28 11:44:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:28.820250286 +0000 UTC m=+1344.615130270" watchObservedRunningTime="2026-01-28 11:44:28.832925562 +0000 UTC m=+1344.627805546" Jan 28 11:44:28 crc kubenswrapper[4804]: I0128 11:44:28.911728 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049257 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049327 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049493 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049630 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049670 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049700 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049715 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049739 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") pod \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\" (UID: \"88f7d3c2-ab36-467f-8ad5-0e899f804eca\") " Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049770 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.049838 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.050246 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.050263 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/88f7d3c2-ab36-467f-8ad5-0e899f804eca-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.058035 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts" (OuterVolumeSpecName: "scripts") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.058056 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9" (OuterVolumeSpecName: "kube-api-access-chjp9") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "kube-api-access-chjp9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.077529 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.121572 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.136225 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.146320 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152274 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152307 4804 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152319 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152328 4804 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-chjp9\" (UniqueName: \"kubernetes.io/projected/88f7d3c2-ab36-467f-8ad5-0e899f804eca-kube-api-access-chjp9\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.152337 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.155700 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data" (OuterVolumeSpecName: "config-data") pod "88f7d3c2-ab36-467f-8ad5-0e899f804eca" (UID: "88f7d3c2-ab36-467f-8ad5-0e899f804eca"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.253976 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/88f7d3c2-ab36-467f-8ad5-0e899f804eca-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.809474 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" exitCode=143 Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.809532 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerDied","Data":"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807"} Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.812100 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"88f7d3c2-ab36-467f-8ad5-0e899f804eca","Type":"ContainerDied","Data":"5b68752056e51dbeeb9e5943373f93474614d0a923a621405962becd6490ba78"} Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.812145 4804 scope.go:117] "RemoveContainer" containerID="71f4e929646b37e6528bf1990153715e310d17c0e8eb1c00810ed174d44e4221" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.812116 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.854144 4804 scope.go:117] "RemoveContainer" containerID="b2a363c153285938800237b851bb51663040772805a005ecd0b0a83e28b140b9" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.869144 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.878999 4804 scope.go:117] "RemoveContainer" containerID="403358f8f419c7811efcc028282e1eb89fb15ba997a4f33821c45033f374f2b8" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.880006 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.897398 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898126 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898224 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898315 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898393 4804 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898480 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898563 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.898651 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.898725 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899054 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-notification-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899162 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="proxy-httpd" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899252 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="ceilometer-central-agent" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.899342 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" containerName="sg-core" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.901418 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.903728 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.904052 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.904433 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.908049 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.908864 4804 scope.go:117] "RemoveContainer" containerID="6ded14998fd4372415a5c145385be7bd59ee9decf6e259549de3128029c8700a" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967323 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967373 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967415 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"ceilometer-0\" (UID: 
\"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967449 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967479 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967610 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967649 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.967734 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:29 crc kubenswrapper[4804]: I0128 11:44:29.997677 4804 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:29 crc kubenswrapper[4804]: E0128 11:44:29.998531 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[ceilometer-tls-certs combined-ca-bundle config-data kube-api-access-hldj2 log-httpd run-httpd scripts sg-core-conf-yaml], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/ceilometer-0" podUID="9716bec9-6c7d-49e3-8c79-ba4c723d8be9" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.069432 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.069487 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.069541 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.070060 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.071798 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072269 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072395 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072602 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.072635 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.074195 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" 
Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.076651 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.079185 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.079327 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.084639 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.085915 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.088460 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") 
pod \"ceilometer-0\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.821831 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.834540 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.885773 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886032 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886129 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886167 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886219 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886252 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886297 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886320 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") pod \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\" (UID: \"9716bec9-6c7d-49e3-8c79-ba4c723d8be9\") " Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886927 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.886941 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.891572 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts" (OuterVolumeSpecName: "scripts") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.891714 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2" (OuterVolumeSpecName: "kube-api-access-hldj2") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "kube-api-access-hldj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.891731 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data" (OuterVolumeSpecName: "config-data") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.892120 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.892536 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.893066 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9716bec9-6c7d-49e3-8c79-ba4c723d8be9" (UID: "9716bec9-6c7d-49e3-8c79-ba4c723d8be9"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.927864 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88f7d3c2-ab36-467f-8ad5-0e899f804eca" path="/var/lib/kubelet/pods/88f7d3c2-ab36-467f-8ad5-0e899f804eca/volumes" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990002 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hldj2\" (UniqueName: \"kubernetes.io/projected/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-kube-api-access-hldj2\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990038 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990051 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990062 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990072 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990113 4804 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990124 4804 
reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:30 crc kubenswrapper[4804]: I0128 11:44:30.990134 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9716bec9-6c7d-49e3-8c79-ba4c723d8be9-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.833958 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.892866 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.905083 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.919168 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.921996 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.924138 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.924335 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.925084 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Jan 28 11:44:31 crc kubenswrapper[4804]: I0128 11:44:31.929542 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010394 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010491 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010601 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.010636 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011121 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011173 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011295 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.011404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117851 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"ceilometer-0\" (UID: 
\"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117920 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117954 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.117996 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118058 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118096 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118129 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118152 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118319 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.118508 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.123286 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.123374 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 
crc kubenswrapper[4804]: I0128 11:44:32.134348 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.134545 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.135608 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.137489 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"ceilometer-0\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.241674 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.384835 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.524969 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.525125 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.525156 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.525187 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") pod \"fc89087f-fead-4af8-b13c-67af8c77e7f7\" (UID: \"fc89087f-fead-4af8-b13c-67af8c77e7f7\") " Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.528612 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs" (OuterVolumeSpecName: "logs") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.533075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4" (OuterVolumeSpecName: "kube-api-access-jxws4") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "kube-api-access-jxws4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.557865 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.561224 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data" (OuterVolumeSpecName: "config-data") pod "fc89087f-fead-4af8-b13c-67af8c77e7f7" (UID: "fc89087f-fead-4af8-b13c-67af8c77e7f7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628073 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628408 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fc89087f-fead-4af8-b13c-67af8c77e7f7-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628420 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fc89087f-fead-4af8-b13c-67af8c77e7f7-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.628428 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jxws4\" (UniqueName: \"kubernetes.io/projected/fc89087f-fead-4af8-b13c-67af8c77e7f7-kube-api-access-jxws4\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.720012 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: W0128 11:44:32.735896 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f5a2ef_6224_4af8_8bba_32c689a960f1.slice/crio-84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a WatchSource:0}: Error finding container 84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a: Status 404 returned error can't find the container with id 84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844819 4804 generic.go:334] "Generic (PLEG): container finished" podID="fc89087f-fead-4af8-b13c-67af8c77e7f7" 
containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" exitCode=0 Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844910 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerDied","Data":"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b"} Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844945 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"fc89087f-fead-4af8-b13c-67af8c77e7f7","Type":"ContainerDied","Data":"8826377baf8bb298bdd184ad91e3e80fe9c1df416ad9ff988cf49cad24cb348e"} Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.844967 4804 scope.go:117] "RemoveContainer" containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.845091 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.849455 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a"} Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.876292 4804 scope.go:117] "RemoveContainer" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.879307 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.897975 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.900576 4804 scope.go:117] "RemoveContainer" containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" Jan 28 11:44:32 crc 
kubenswrapper[4804]: E0128 11:44:32.902114 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b\": container with ID starting with 3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b not found: ID does not exist" containerID="3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.902141 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b"} err="failed to get container status \"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b\": rpc error: code = NotFound desc = could not find container \"3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b\": container with ID starting with 3c01be2844b8992ffeee1f69c118bbf3492cb9dca9cd1b25aea8de0a920e664b not found: ID does not exist" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.902160 4804 scope.go:117] "RemoveContainer" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" Jan 28 11:44:32 crc kubenswrapper[4804]: E0128 11:44:32.902384 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807\": container with ID starting with b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807 not found: ID does not exist" containerID="b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807" Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.902400 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807"} err="failed to get container status 
\"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807\": rpc error: code = NotFound desc = could not find container \"b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807\": container with ID starting with b561124c5300237b142377aa10852f8477b0e364b5b3b13db419528fcae30807 not found: ID does not exist"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.909221 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:32 crc kubenswrapper[4804]: E0128 11:44:32.909733 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.909756 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log"
Jan 28 11:44:32 crc kubenswrapper[4804]: E0128 11:44:32.909850 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.909857 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.910067 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-log"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.910116 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" containerName="nova-api-api"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.911347 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.913457 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.916838 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.919856 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.936082 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9716bec9-6c7d-49e3-8c79-ba4c723d8be9" path="/var/lib/kubelet/pods/9716bec9-6c7d-49e3-8c79-ba4c723d8be9/volumes"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.936768 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc89087f-fead-4af8-b13c-67af8c77e7f7" path="/var/lib/kubelet/pods/fc89087f-fead-4af8-b13c-67af8c77e7f7/volumes"
Jan 28 11:44:32 crc kubenswrapper[4804]: I0128 11:44:32.939084 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.045933 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.045988 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046127 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046207 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046429 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.046531 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.148986 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149026 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149050 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149069 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149138 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149181 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.149464 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.155751 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.156453 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.160538 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.164626 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.165501 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"nova-api-0\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.233630 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.824462 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.859677 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerStarted","Data":"b308f4806837516327e93a19a8f6375deeb1fd9edc0b6c41208476dcc8be7a1b"}
Jan 28 11:44:33 crc kubenswrapper[4804]: I0128 11:44:33.861716 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"}
Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.147152 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.169359 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.872034 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerStarted","Data":"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073"}
Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.872379 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerStarted","Data":"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31"}
Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.876671 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"}
Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.898855 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.898834278 podStartE2EDuration="2.898834278s" podCreationTimestamp="2026-01-28 11:44:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:34.889088172 +0000 UTC m=+1350.683968176" watchObservedRunningTime="2026-01-28 11:44:34.898834278 +0000 UTC m=+1350.693714272"
Jan 28 11:44:34 crc kubenswrapper[4804]: I0128 11:44:34.904435 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.067806 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"]
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.069017 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.071670 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.071984 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.078328 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"]
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190228 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190500 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190556 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.190686 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292110 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292163 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292218 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.292303 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.297785 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.298247 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.298469 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.309017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"nova-cell1-cell-mapping-29mtd\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") " pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.411379 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:35 crc kubenswrapper[4804]: I0128 11:44:35.872023 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"]
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.336086 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.402775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"]
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.403055 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-865f5d856f-9f892" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" containerID="cri-o://a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f" gracePeriod=10
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.896434 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"}
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.901294 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerStarted","Data":"2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7"}
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.901336 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerStarted","Data":"cd4b266430faeba6867917a0825a451df9444fe70269f695949d4c5d992bc8b4"}
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.905006 4804 generic.go:334] "Generic (PLEG): container finished" podID="913fe193-1d5f-4561-9618-fde749a25a1d" containerID="a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f" exitCode=0
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.905046 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerDied","Data":"a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f"}
Jan 28 11:44:36 crc kubenswrapper[4804]: I0128 11:44:36.931376 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-29mtd" podStartSLOduration=1.931360303 podStartE2EDuration="1.931360303s" podCreationTimestamp="2026-01-28 11:44:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:36.926306715 +0000 UTC m=+1352.721186699" watchObservedRunningTime="2026-01-28 11:44:36.931360303 +0000 UTC m=+1352.726240287"
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.045803 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892"
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143723 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") "
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143783 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") "
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143867 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") "
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") "
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.143975 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") "
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.144015 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") pod \"913fe193-1d5f-4561-9618-fde749a25a1d\" (UID: \"913fe193-1d5f-4561-9618-fde749a25a1d\") "
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.163265 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v" (OuterVolumeSpecName: "kube-api-access-l7t2v") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "kube-api-access-l7t2v". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.207336 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "dns-swift-storage-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.208581 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config" (OuterVolumeSpecName: "config") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.229985 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.235525 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247463 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247508 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l7t2v\" (UniqueName: \"kubernetes.io/projected/913fe193-1d5f-4561-9618-fde749a25a1d-kube-api-access-l7t2v\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247525 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247537 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-svc\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.247548 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-dns-swift-storage-0\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.251144 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "913fe193-1d5f-4561-9618-fde749a25a1d" (UID: "913fe193-1d5f-4561-9618-fde749a25a1d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.349871 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/913fe193-1d5f-4561-9618-fde749a25a1d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.917335 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-865f5d856f-9f892" event={"ID":"913fe193-1d5f-4561-9618-fde749a25a1d","Type":"ContainerDied","Data":"2cfff1780e426c4b862ba06c4d5d217da66ed95a6d5a815238b1e3776a2afeb4"}
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.917421 4804 scope.go:117] "RemoveContainer" containerID="a420e8ae687c2c995a304fcdb8d308a9f54e0ef0b5158ed4df80a9da46192b9f"
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.917368 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-865f5d856f-9f892"
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.950700 4804 scope.go:117] "RemoveContainer" containerID="5b66ffd3825053b82c96be643a5c4b3e14230fd04a94235eb6e84c88e45a3ddc"
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.971502 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"]
Jan 28 11:44:37 crc kubenswrapper[4804]: I0128 11:44:37.982425 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-865f5d856f-9f892"]
Jan 28 11:44:38 crc kubenswrapper[4804]: I0128 11:44:38.930748 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" path="/var/lib/kubelet/pods/913fe193-1d5f-4561-9618-fde749a25a1d/volumes"
Jan 28 11:44:39 crc kubenswrapper[4804]: I0128 11:44:39.937725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerStarted","Data":"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"}
Jan 28 11:44:39 crc kubenswrapper[4804]: I0128 11:44:39.938020 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0"
Jan 28 11:44:39 crc kubenswrapper[4804]: I0128 11:44:39.979581 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.448041488 podStartE2EDuration="8.979552632s" podCreationTimestamp="2026-01-28 11:44:31 +0000 UTC" firstStartedPulling="2026-01-28 11:44:32.738217431 +0000 UTC m=+1348.533097415" lastFinishedPulling="2026-01-28 11:44:39.269728575 +0000 UTC m=+1355.064608559" observedRunningTime="2026-01-28 11:44:39.965559854 +0000 UTC m=+1355.760439838" watchObservedRunningTime="2026-01-28 11:44:39.979552632 +0000 UTC m=+1355.774432616"
Jan 28 11:44:41 crc kubenswrapper[4804]: I0128 11:44:41.958068 4804 generic.go:334] "Generic (PLEG): container finished" podID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerID="2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7" exitCode=0
Jan 28 11:44:41 crc kubenswrapper[4804]: I0128 11:44:41.958309 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerDied","Data":"2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7"}
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.582374 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.582427 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.582469 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.583226 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.583293 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95" gracePeriod=600
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.972127 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95" exitCode=0
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.972195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95"}
Jan 28 11:44:42 crc kubenswrapper[4804]: I0128 11:44:42.972607 4804 scope.go:117] "RemoveContainer" containerID="ed6af6b086af0e36078ceaad545a02650a81d6b24e2afd021938bf20fba0d1ad"
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.235562 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.235618 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.340349 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369377 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") "
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369422 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") "
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369541 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") "
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.369636 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") pod \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\" (UID: \"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb\") "
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.376964 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts" (OuterVolumeSpecName: "scripts") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.381960 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr" (OuterVolumeSpecName: "kube-api-access-4hljr") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "kube-api-access-4hljr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.402415 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.405555 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data" (OuterVolumeSpecName: "config-data") pod "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" (UID: "cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.471729 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4hljr\" (UniqueName: \"kubernetes.io/projected/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-kube-api-access-4hljr\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.472034 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.472102 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.472163 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.986539 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-29mtd" event={"ID":"cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb","Type":"ContainerDied","Data":"cd4b266430faeba6867917a0825a451df9444fe70269f695949d4c5d992bc8b4"}
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.986586 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd4b266430faeba6867917a0825a451df9444fe70269f695949d4c5d992bc8b4"
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.986647 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-29mtd"
Jan 28 11:44:43 crc kubenswrapper[4804]: I0128 11:44:43.995429 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"}
Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.218059 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.218623 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" containerID="cri-o://19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" gracePeriod=30
Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.218755 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" containerID="cri-o://88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" gracePeriod=30
Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.230118 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": EOF"
Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.231302 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.231508 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerName="nova-scheduler-scheduler" 
containerID="cri-o://79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" gracePeriod=30 Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.237647 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.205:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.304049 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.311251 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" containerID="cri-o://ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" gracePeriod=30 Jan 28 11:44:44 crc kubenswrapper[4804]: I0128 11:44:44.311958 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" containerID="cri-o://2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" gracePeriod=30 Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.017800 4804 generic.go:334] "Generic (PLEG): container finished" podID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" exitCode=143 Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.017916 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerDied","Data":"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31"} Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.021366 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" exitCode=143 Jan 28 11:44:45 crc kubenswrapper[4804]: I0128 11:44:45.021465 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerDied","Data":"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f"} Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.473434 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:51214->10.217.0.196:8775: read: connection reset by peer" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.473541 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.196:8775/\": read tcp 10.217.0.2:51198->10.217.0.196:8775: read: connection reset by peer" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.927569 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957248 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957367 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957636 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.957810 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") pod \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\" (UID: \"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3\") " Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.958331 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs" (OuterVolumeSpecName: "logs") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.958732 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:47 crc kubenswrapper[4804]: I0128 11:44:47.968462 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72" (OuterVolumeSpecName: "kube-api-access-zcd72") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "kube-api-access-zcd72". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.042136 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data" (OuterVolumeSpecName: "config-data") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.039554 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.082301 4804 generic.go:334] "Generic (PLEG): container finished" podID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" exitCode=0 Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.082390 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" (UID: "60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.082528 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.084260 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerDied","Data":"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a"} Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.084314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3","Type":"ContainerDied","Data":"59b87bf7f61b635191f6dbdc4606dc428f7e447869edcc7c8d84ce1e273ec312"} Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.084347 4804 scope.go:117] "RemoveContainer" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093175 4804 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-nova-metadata-tls-certs\") on node \"crc\" 
DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093241 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093258 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.093281 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zcd72\" (UniqueName: \"kubernetes.io/projected/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3-kube-api-access-zcd72\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.127908 4804 scope.go:117] "RemoveContainer" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.156518 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.176408 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.178065 4804 scope.go:117] "RemoveContainer" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.180371 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a\": container with ID starting with 2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a not found: ID does not exist" containerID="2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a" Jan 28 11:44:48 crc kubenswrapper[4804]: 
I0128 11:44:48.180422 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a"} err="failed to get container status \"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a\": rpc error: code = NotFound desc = could not find container \"2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a\": container with ID starting with 2255e7a5e510e47e0b8ff535127f092aff2df9b12cec4060555423bbf804af8a not found: ID does not exist" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.180450 4804 scope.go:117] "RemoveContainer" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.186039 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f\": container with ID starting with ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f not found: ID does not exist" containerID="ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.186107 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f"} err="failed to get container status \"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f\": rpc error: code = NotFound desc = could not find container \"ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f\": container with ID starting with ba89583508cf562ba7effd03f3550a5ef331cd649029d781c56c3a85245b8e2f not found: ID does not exist" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.194535 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195280 
4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerName="nova-manage" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195311 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerName="nova-manage" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195339 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195349 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195369 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195378 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195422 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="init" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195432 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="init" Jan 28 11:44:48 crc kubenswrapper[4804]: E0128 11:44:48.195467 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195473 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195735 4804 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-metadata" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195752 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="913fe193-1d5f-4561-9618-fde749a25a1d" containerName="dnsmasq-dns" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195769 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" containerName="nova-manage" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.195781 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" containerName="nova-metadata-log" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.197765 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.201813 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.202222 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.217712 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298128 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298223 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67wq4\" (UniqueName: 
\"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298326 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298611 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.298850 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.401705 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403221 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"nova-metadata-0\" (UID: 
\"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403433 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403633 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.403747 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.405067 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.407648 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.409404 4804 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.419973 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.426812 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"nova-metadata-0\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") " pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.509186 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.528026 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.608616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") pod \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.608716 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") pod \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.608809 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") pod \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\" (UID: \"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711\") " Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.613953 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz" (OuterVolumeSpecName: "kube-api-access-rphbz") pod "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" (UID: "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711"). InnerVolumeSpecName "kube-api-access-rphbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.634413 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" (UID: "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.645337 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data" (OuterVolumeSpecName: "config-data") pod "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" (UID: "99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.715736 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.715789 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rphbz\" (UniqueName: \"kubernetes.io/projected/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-kube-api-access-rphbz\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.715806 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:48 crc kubenswrapper[4804]: I0128 11:44:48.938241 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3" path="/var/lib/kubelet/pods/60dd1bc0-1015-4f2e-8fe0-4e33e2fe36d3/volumes" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.024571 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: W0128 11:44:49.031944 4804 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb0bfaf6b_2c74_4812_965a_4db80f0c4527.slice/crio-dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e WatchSource:0}: Error finding container dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e: Status 404 returned error can't find the container with id dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.091780 4804 generic.go:334] "Generic (PLEG): container finished" podID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" exitCode=0 Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.091843 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.091972 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerDied","Data":"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1"} Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.092018 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711","Type":"ContainerDied","Data":"f3e4f04cf9239f80de0615a7c82510a14c24dd1f74e0a4378ba5cd12abed76cf"} Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.092042 4804 scope.go:117] "RemoveContainer" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.095746 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerStarted","Data":"dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e"} Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.126875 4804 
scope.go:117] "RemoveContainer" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" Jan 28 11:44:49 crc kubenswrapper[4804]: E0128 11:44:49.127470 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1\": container with ID starting with 79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1 not found: ID does not exist" containerID="79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.127707 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1"} err="failed to get container status \"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1\": rpc error: code = NotFound desc = could not find container \"79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1\": container with ID starting with 79b8700eae71a0b3152c2b06a1c9b6928b5c00771e395344d00b52ff085424f1 not found: ID does not exist" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.159567 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.175580 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.184831 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: E0128 11:44:49.185218 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerName="nova-scheduler-scheduler" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.185235 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" 
containerName="nova-scheduler-scheduler" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.185418 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" containerName="nova-scheduler-scheduler" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.186036 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.188868 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.199187 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.224305 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.224351 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.224395 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.327625 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.327688 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.327745 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.333931 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.333984 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.350539 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xqdj4\" (UniqueName: 
\"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"nova-scheduler-0\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.509553 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:44:49 crc kubenswrapper[4804]: I0128 11:44:49.977514 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.073452 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113211 4804 generic.go:334] "Generic (PLEG): container finished" podID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" exitCode=0 Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113312 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerDied","Data":"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113596 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650","Type":"ContainerDied","Data":"b308f4806837516327e93a19a8f6375deeb1fd9edc0b6c41208476dcc8be7a1b"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113615 4804 scope.go:117] "RemoveContainer" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.113322 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.116519 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerStarted","Data":"0f20d09f4e22850dccdafc066e7822cd90278816628e2fe4c307f19e6234a0ef"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.126502 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerStarted","Data":"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.126552 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerStarted","Data":"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06"} Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.142236 4804 scope.go:117] "RemoveContainer" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143204 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143266 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143639 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" 
(UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143763 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143800 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.143928 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") pod \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\" (UID: \"c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650\") " Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.144331 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs" (OuterVolumeSpecName: "logs") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.146695 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.158809 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.158785826 podStartE2EDuration="2.158785826s" podCreationTimestamp="2026-01-28 11:44:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:50.156401972 +0000 UTC m=+1365.951281956" watchObservedRunningTime="2026-01-28 11:44:50.158785826 +0000 UTC m=+1365.953665810" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.164126 4804 scope.go:117] "RemoveContainer" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.164468 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073\": container with ID starting with 88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073 not found: ID does not exist" containerID="88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.164568 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073"} err="failed to get container status \"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073\": rpc error: code = NotFound desc = could not find container \"88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073\": container with ID starting 
with 88a85132aca12c25178f1d2ccb042aa6f82421f5695e13252723ca8a8320f073 not found: ID does not exist" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.164644 4804 scope.go:117] "RemoveContainer" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.165101 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31\": container with ID starting with 19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31 not found: ID does not exist" containerID="19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.165124 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31"} err="failed to get container status \"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31\": rpc error: code = NotFound desc = could not find container \"19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31\": container with ID starting with 19f5fc26647d0f19b1397dec93d08d902ce321b4244bdae32a8dd70c77163b31 not found: ID does not exist" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.170346 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8" (OuterVolumeSpecName: "kube-api-access-d6jb8") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "kube-api-access-d6jb8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.199949 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.202576 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data" (OuterVolumeSpecName: "config-data") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.222515 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.224438 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" (UID: "c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249405 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249483 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6jb8\" (UniqueName: \"kubernetes.io/projected/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-kube-api-access-d6jb8\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249514 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249644 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.249657 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.471747 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.495362 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.514839 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.515509 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515533 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" Jan 28 11:44:50 crc kubenswrapper[4804]: E0128 11:44:50.515556 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515565 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515782 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-api" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.515811 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" containerName="nova-api-log" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.517232 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.521578 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.521614 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.522443 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.526340 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.556695 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.556976 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557016 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557063 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557202 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.557287 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658638 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658743 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 
11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658821 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658911 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.658956 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.659739 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.662932 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.663004 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"nova-api-0\" (UID: 
\"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.664179 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.666177 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.684652 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"nova-api-0\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.901631 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.943495 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711" path="/var/lib/kubelet/pods/99ee8dc6-b4c2-46ef-a2a5-3ba27ff2f711/volumes" Jan 28 11:44:50 crc kubenswrapper[4804]: I0128 11:44:50.944480 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650" path="/var/lib/kubelet/pods/c7a57b7c-8e2d-4bbe-9c1c-80b8c1c87650/volumes" Jan 28 11:44:51 crc kubenswrapper[4804]: I0128 11:44:51.145689 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerStarted","Data":"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"} Jan 28 11:44:51 crc kubenswrapper[4804]: I0128 11:44:51.173019 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.172998193 podStartE2EDuration="2.172998193s" podCreationTimestamp="2026-01-28 11:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:51.162192836 +0000 UTC m=+1366.957072840" watchObservedRunningTime="2026-01-28 11:44:51.172998193 +0000 UTC m=+1366.967878187" Jan 28 11:44:51 crc kubenswrapper[4804]: I0128 11:44:51.406019 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:44:51 crc kubenswrapper[4804]: W0128 11:44:51.415379 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podae0fb199_797a_40c6_8c71_3b5a976b6c61.slice/crio-b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf WatchSource:0}: Error finding container b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf: Status 404 returned 
error can't find the container with id b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.160819 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerStarted","Data":"5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649"} Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.161409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerStarted","Data":"61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226"} Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.161433 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerStarted","Data":"b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf"} Jan 28 11:44:52 crc kubenswrapper[4804]: I0128 11:44:52.202434 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.202403803 podStartE2EDuration="2.202403803s" podCreationTimestamp="2026-01-28 11:44:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:44:52.185154675 +0000 UTC m=+1367.980034659" watchObservedRunningTime="2026-01-28 11:44:52.202403803 +0000 UTC m=+1367.997283787" Jan 28 11:44:53 crc kubenswrapper[4804]: I0128 11:44:53.529313 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:44:53 crc kubenswrapper[4804]: I0128 11:44:53.530657 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Jan 28 11:44:54 crc kubenswrapper[4804]: I0128 11:44:54.510864 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/nova-scheduler-0"
Jan 28 11:44:58 crc kubenswrapper[4804]: I0128 11:44:58.528483 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 28 11:44:58 crc kubenswrapper[4804]: I0128 11:44:58.529056 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.510836 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0"
Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.540118 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0"
Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.543206 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:44:59 crc kubenswrapper[4804]: I0128 11:44:59.543241 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.163327 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"]
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.165303 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.167514 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.167666 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.173531 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"]
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.183991 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.184082 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.184385 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.257531 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.286727 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.287144 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.287274 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.288153 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.295017 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.310137 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"collect-profiles-29493345-psbzr\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:00 crc kubenswrapper[4804]: I0128 11:45:00.488941 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:00.902473 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:00.902796 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0"
Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:01.767650 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"]
Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:01.929063 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:45:01 crc kubenswrapper[4804]: I0128 11:45:01.929076 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.209:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 11:45:02.255661 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerStarted","Data":"ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625"}
Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 11:45:02.256046 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerStarted","Data":"85ff122a1d329198ef775f9a9af46551b86e618d0b7980ba248ca6736acc1112"}
Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 11:45:02.258665 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0"
Jan 28 11:45:02 crc kubenswrapper[4804]: I0128 11:45:02.283754 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" podStartSLOduration=2.283732974 podStartE2EDuration="2.283732974s" podCreationTimestamp="2026-01-28 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-28 11:45:02.277141348 +0000 UTC m=+1378.072021332" watchObservedRunningTime="2026-01-28 11:45:02.283732974 +0000 UTC m=+1378.078612958"
Jan 28 11:45:03 crc kubenswrapper[4804]: I0128 11:45:03.265588 4804 generic.go:334] "Generic (PLEG): container finished" podID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerID="ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625" exitCode=0
Jan 28 11:45:03 crc kubenswrapper[4804]: I0128 11:45:03.265639 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerDied","Data":"ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625"}
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.642082 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.774527 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") pod \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") "
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.774629 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") pod \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") "
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.774764 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") pod \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\" (UID: \"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc\") "
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.779922 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume" (OuterVolumeSpecName: "config-volume") pod "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" (UID: "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.781996 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn" (OuterVolumeSpecName: "kube-api-access-zstbn") pod "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" (UID: "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc"). InnerVolumeSpecName "kube-api-access-zstbn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.782906 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" (UID: "deda2a52-b6b6-4b65-87d2-26a7ca06a7dc"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.876522 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.876556 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:04 crc kubenswrapper[4804]: I0128 11:45:04.876566 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zstbn\" (UniqueName: \"kubernetes.io/projected/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc-kube-api-access-zstbn\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:05 crc kubenswrapper[4804]: I0128 11:45:05.285409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr" event={"ID":"deda2a52-b6b6-4b65-87d2-26a7ca06a7dc","Type":"ContainerDied","Data":"85ff122a1d329198ef775f9a9af46551b86e618d0b7980ba248ca6736acc1112"}
Jan 28 11:45:05 crc kubenswrapper[4804]: I0128 11:45:05.285465 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85ff122a1d329198ef775f9a9af46551b86e618d0b7980ba248ca6736acc1112"
Jan 28 11:45:05 crc kubenswrapper[4804]: I0128 11:45:05.285533 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"
Jan 28 11:45:08 crc kubenswrapper[4804]: I0128 11:45:08.549198 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 28 11:45:08 crc kubenswrapper[4804]: I0128 11:45:08.550829 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0"
Jan 28 11:45:08 crc kubenswrapper[4804]: I0128 11:45:08.560258 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 28 11:45:09 crc kubenswrapper[4804]: I0128 11:45:09.345151 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0"
Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.910620 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.910726 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0"
Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.911129 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.911176 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0"
Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.926214 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 11:45:10 crc kubenswrapper[4804]: I0128 11:45:10.926277 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.393600 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstackclient"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.394449 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstackclient" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient" containerID="cri-o://0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" gracePeriod=2
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.406310 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstackclient"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.580692 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.580937 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler" containerID="cri-o://005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7" gracePeriod=30
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.581475 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe" containerID="cri-o://c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c" gracePeriod=30
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.627531 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"]
Jan 28 11:45:30 crc kubenswrapper[4804]: E0128 11:45:30.628026 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerName="collect-profiles"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628039 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerName="collect-profiles"
Jan 28 11:45:30 crc kubenswrapper[4804]: E0128 11:45:30.628064 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628070 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628247 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" containerName="collect-profiles"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.628274 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerName="openstackclient"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.630018 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.637959 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.650054 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.694870 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/root-account-create-update-jqk9s"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.696251 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.705032 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-mariadb-root-db-secret"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.740354 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jqk9s"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.741075 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.741217 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.824540 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842763 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842844 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842925 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.842957 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.843730 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.847252 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-w544f"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.871477 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.871781 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log" containerID="cri-o://b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a" gracePeriod=30
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.872249 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api" containerID="cri-o://7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774" gracePeriod=30
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.895975 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.946520 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.946622 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.946751 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"barbican-8522-account-create-update-8fq2p\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.947348 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.956590 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8522-account-create-update-rlttq"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.956850 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.962830 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-w544f"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.975865 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"]
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.976155 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-metrics-gtg97" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter" containerID="cri-o://565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b" gracePeriod=30
Jan 28 11:45:30 crc kubenswrapper[4804]: I0128 11:45:30.995197 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.014007 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.036814 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"root-account-create-update-jqk9s\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.036897 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.039343 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jqk9s"
Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.052426 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.052479 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:31.552464585 +0000 UTC m=+1407.347344569 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.093816 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-753f-account-create-update-2x2r6"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.129773 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.149830 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-ec8f-account-create-update-wm9f2"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.250757 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.337161 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wch49"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.390394 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.422028 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.422900 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" containerID="cri-o://1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" gracePeriod=30
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.423225 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-northd-0" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="openstack-network-exporter" containerID="cri-o://17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1" gracePeriod=30
Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.471379 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.471448 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:31.971428865 +0000 UTC m=+1407.766308949 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.495919 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wch49"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.525947 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-j6x65"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.561998 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.563673 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.570467 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret"
Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.572541 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 28 11:45:31 crc kubenswrapper[4804]: E0128 11:45:31.572627 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:32.572607889 +0000 UTC m=+1408.367487873 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.587827 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.635431 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-2swjk"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.676391 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.676476 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.734082 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.796330 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.796671 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.801191 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.897210 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"nova-api-0c6f-account-create-update-hhm9c\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.927145 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"]
Jan 28 11:45:31 crc kubenswrapper[4804]: I0128 11:45:31.927982 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="openstack-network-exporter" containerID="cri-o://083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885" gracePeriod=300
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.022469 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c"
Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.030267 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.035251 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:33.035209871 +0000 UTC m=+1408.830089855 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.049265 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-2swjk"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.124365 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-a291-account-create-update-dlt8t"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.155770 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.219971 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-2c81-account-create-update-ldfns"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.265818 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-bnpvd"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.296162 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-bnpvd"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.339940 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340425 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" containerID="cri-o://c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" gracePeriod=30
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340829 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"
containerName="swift-recon-cron" containerID="cri-o://3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340875 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" containerID="cri-o://a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.340997 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" containerID="cri-o://43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341032 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" containerID="cri-o://02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341064 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" containerID="cri-o://88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341092 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" containerID="cri-o://f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341120 4804 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" containerID="cri-o://5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341155 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" containerID="cri-o://e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341184 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-auditor" containerID="cri-o://ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341212 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" containerID="cri-o://fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341240 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" containerID="cri-o://a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341280 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" containerID="cri-o://a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" gracePeriod=30 Jan 28 11:45:32 
crc kubenswrapper[4804]: I0128 11:45:32.341309 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" containerID="cri-o://2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.341340 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-storage-0" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" containerID="cri-o://1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" gracePeriod=30 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.432040 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.432668 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter" containerID="cri-o://7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357" gracePeriod=300 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.477290 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-nb-0" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb" containerID="cri-o://445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56" gracePeriod=300 Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.500775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.557138 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-9brzz"] Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.626206 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/swift-ring-rebalance-jxgc9"]
Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.666198 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found
Jan 28 11:45:32 crc kubenswrapper[4804]: E0128 11:45:32.666259 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:34.666245442 +0000 UTC m=+1410.461125416 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.683698 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.683946 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log" containerID="cri-o://54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896" gracePeriod=30
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.684288 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api" containerID="cri-o://2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa" gracePeriod=30
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.714940 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.715478 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns" containerID="cri-o://91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93" gracePeriod=10
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.735939 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovsdbserver-sb-0" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb" containerID="cri-o://1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" gracePeriod=300
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.760671 4804 generic.go:334] "Generic (PLEG): container finished" podID="edcdd787-6628-49ee-abcf-0146c096f547" containerID="17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1" exitCode=2
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.760761 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerDied","Data":"17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1"}
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.763003 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-ring-rebalance-jxgc9"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.776032 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.790480 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-29mtd"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.811970 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-b679z"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.818033 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.821435 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gtg97_f7359aec-58b3-4254-8765-cdc131e5f912/openstack-network-exporter/0.log"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.821479 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7359aec-58b3-4254-8765-cdc131e5f912" containerID="565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b" exitCode=2
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.821552 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerDied","Data":"565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b"}
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.834001 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerStarted","Data":"59c191ec61924ab2b5f8fefd52ae2f9680b75391edc58ed63a1c1c209e71f63c"}
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.836876 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.843123 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-b679z"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.850917 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-blnpq"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.871114 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-7d4e-account-create-update-hrzrw"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.874105 4804 generic.go:334] "Generic (PLEG): container finished" podID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerID="b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a" exitCode=143
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.874160 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerDied","Data":"b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a"}
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.899971 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-jqlrv"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.929416 4804 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="openstack/glance-default-internal-api-0" secret="" err="secret \"glance-glance-dockercfg-dv6zq\" not found"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.938311 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12849043-1f8e-4d1f-aae3-9cbc35ea4361" path="/var/lib/kubelet/pods/12849043-1f8e-4d1f-aae3-9cbc35ea4361/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.938928 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2baa2aa0-600d-4728-bb8c-7fee05022658" path="/var/lib/kubelet/pods/2baa2aa0-600d-4728-bb8c-7fee05022658/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.939523 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38148c07-9662-4f0b-8285-a02633a7cd37" path="/var/lib/kubelet/pods/38148c07-9662-4f0b-8285-a02633a7cd37/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.940204 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bd4fedc-8940-48ad-b718-4fbb98e48bf0" path="/var/lib/kubelet/pods/3bd4fedc-8940-48ad-b718-4fbb98e48bf0/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.943959 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57723f90-020a-42b7-ad6c-49e998417f27" path="/var/lib/kubelet/pods/57723f90-020a-42b7-ad6c-49e998417f27/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.944673 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b292a47-f331-472d-941e-193e41fee49f" path="/var/lib/kubelet/pods/6b292a47-f331-472d-941e-193e41fee49f/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.947254 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72b9a8c6-1dc2-4083-9cbe-0564721ef7bf" path="/var/lib/kubelet/pods/72b9a8c6-1dc2-4083-9cbe-0564721ef7bf/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.947930 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="99ffbce9-a3f3-4012-861a-fae498510fde" path="/var/lib/kubelet/pods/99ffbce9-a3f3-4012-861a-fae498510fde/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.953354 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ba69153d-cb1a-4a90-b52a-19ecc0f5b77a" path="/var/lib/kubelet/pods/ba69153d-cb1a-4a90-b52a-19ecc0f5b77a/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.954076 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf79509c-10e0-4ebc-a55d-e46f5497e2fd" path="/var/lib/kubelet/pods/bf79509c-10e0-4ebc-a55d-e46f5497e2fd/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.954689 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb46a04b-0e73-46fb-bcdf-a670c30d5531" path="/var/lib/kubelet/pods/cb46a04b-0e73-46fb-bcdf-a670c30d5531/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.958152 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb" path="/var/lib/kubelet/pods/cba1c5e0-be6b-4f9e-8fd6-97cba13b2fcb/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.958794 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5916f11-436f-46f9-b76e-304aa86f91a1" path="/var/lib/kubelet/pods/d5916f11-436f-46f9-b76e-304aa86f91a1/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.959688 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da587a6a-8109-4c08-8395-f4cd6b078dc7" path="/var/lib/kubelet/pods/da587a6a-8109-4c08-8395-f4cd6b078dc7/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.960249 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e541b2a6-870f-4829-bdfc-ad3e4368ec0b" path="/var/lib/kubelet/pods/e541b2a6-870f-4829-bdfc-ad3e4368ec0b/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.965712 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f76909b5-2ed7-476f-8f90-d8c9d168af6d" path="/var/lib/kubelet/pods/f76909b5-2ed7-476f-8f90-d8c9d168af6d/volumes"
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966325 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-jqlrv"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966355 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966370 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"]
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.966578 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" containerID="cri-o://49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096" gracePeriod=30
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.967024 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" containerID="cri-o://ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e" gracePeriod=30
Jan 28 11:45:32 crc kubenswrapper[4804]: W0128 11:45:32.985583 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12271c96_a234_46d8_bc32_80db78339116.slice/crio-892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e WatchSource:0}: Error finding container 892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e: Status 404 returned error can't find the container with id 892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.986576 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" exitCode=0
Jan 28 11:45:32 crc kubenswrapper[4804]: I0128 11:45:32.986705 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb"}
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.996706 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" cmd=["/usr/bin/pidof","ovsdb-server"]
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.997774 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found"
containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" cmd=["/usr/bin/pidof","ovsdb-server"]
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.998302 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" cmd=["/usr/bin/pidof","ovsdb-server"]
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:32.998334 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2 is running failed: container process not found" probeType="Readiness" pod="openstack/ovsdbserver-sb-0" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb"
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:32.998573 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c6c76352-2487-4098-bbee-579834052292/ovsdbserver-nb/0.log"
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:32.998603 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6c76352-2487-4098-bbee-579834052292" containerID="083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885" exitCode=2
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:32.998628 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerDied","Data":"083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885"}
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.022269 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zvgmg"]
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.032470 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=<
Jan 28 11:45:33 crc kubenswrapper[4804]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash
Jan 28 11:45:33 crc kubenswrapper[4804]:
Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh
Jan 28 11:45:33 crc kubenswrapper[4804]:
Jan 28 11:45:33 crc kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."}
Jan 28 11:45:33 crc kubenswrapper[4804]:
Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306"
Jan 28 11:45:33 crc kubenswrapper[4804]:
Jan 28 11:45:33 crc kubenswrapper[4804]: if [ -n "barbican" ]; then
Jan 28 11:45:33 crc kubenswrapper[4804]: GRANT_DATABASE="barbican"
Jan 28 11:45:33 crc kubenswrapper[4804]: else
Jan 28 11:45:33 crc kubenswrapper[4804]: GRANT_DATABASE="*"
Jan 28 11:45:33 crc kubenswrapper[4804]: fi
Jan 28 11:45:33 crc kubenswrapper[4804]:
Jan 28 11:45:33 crc kubenswrapper[4804]: # going for maximum compatibility here:
Jan 28 11:45:33 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used
Jan 28 11:45:33 crc kubenswrapper[4804]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not
Jan 28 11:45:33 crc kubenswrapper[4804]: # 3. create user with CREATE but then do all password and TLS with ALTER to
Jan 28 11:45:33 crc kubenswrapper[4804]: # support updates
Jan 28 11:45:33 crc kubenswrapper[4804]:
Jan 28 11:45:33 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError"
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.034991 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-8522-account-create-update-8fq2p" podUID="12271c96-a234-46d8-bc32-80db78339116"
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.082876 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.082959 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:33.582945262 +0000 UTC m=+1409.377825246 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084005 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084081 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:35.084063447 +0000 UTC m=+1410.878943431 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084148 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.084173 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:33.58416657 +0000 UTC m=+1409.379046554 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.116021 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zvgmg"]
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.129931 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"]
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.154936 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"]
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.164590 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/root-account-create-update-jqk9s"]
Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.195212 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-ea29-account-create-update-fd9sb"]
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.201054 4804 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=<
Jan 28 11:45:33 crc kubenswrapper[4804]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Jan 28 11:45:33 crc kubenswrapper[4804]: + source /usr/local/bin/container-scripts/functions
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNBridge=br-int
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNRemote=tcp:localhost:6642
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNEncapType=geneve
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNAvailabilityZones=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ EnableChassisAsGateway=true
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ PhysicalNetworks=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNHostName=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ DB_FILE=/etc/openvswitch/conf.db
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ ovs_dir=/var/lib/openvswitch
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + cleanup_ovsdb_server_semaphore
Jan 28 11:45:33 crc kubenswrapper[4804]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 28 11:45:33 crc kubenswrapper[4804]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Jan 28 11:45:33 crc kubenswrapper[4804]: > execCommand=["/usr/local/bin/container-scripts/stop-ovsdb-server.sh"] containerName="ovsdb-server" pod="openstack/ovn-controller-ovs-pfzkj" message=<
Jan 28 11:45:33 crc kubenswrapper[4804]: Exiting ovsdb-server (5) [ OK ]
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Jan 28 11:45:33 crc kubenswrapper[4804]: + source /usr/local/bin/container-scripts/functions
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNBridge=br-int
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNRemote=tcp:localhost:6642
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNEncapType=geneve
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNAvailabilityZones=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ EnableChassisAsGateway=true
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ PhysicalNetworks=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNHostName=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ DB_FILE=/etc/openvswitch/conf.db
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ ovs_dir=/var/lib/openvswitch
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5
Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']'
Jan 28 11:45:33 crc kubenswrapper[4804]: + cleanup_ovsdb_server_semaphore
Jan 28 11:45:33 crc kubenswrapper[4804]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server
Jan 28 11:45:33 crc kubenswrapper[4804]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd
Jan 28 11:45:33 crc kubenswrapper[4804]: >
Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.201091 4804 kuberuntime_container.go:691] "PreStop hook failed" err=<
Jan 28 11:45:33 crc kubenswrapper[4804]: command '/usr/local/bin/container-scripts/stop-ovsdb-server.sh' exited with 137: ++ dirname /usr/local/bin/container-scripts/stop-ovsdb-server.sh
Jan 28 11:45:33 crc kubenswrapper[4804]: + source /usr/local/bin/container-scripts/functions
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNBridge=br-int
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNRemote=tcp:localhost:6642
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNEncapType=geneve
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNAvailabilityZones=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ EnableChassisAsGateway=true
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ PhysicalNetworks=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ OVNHostName=
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ DB_FILE=/etc/openvswitch/conf.db
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ ovs_dir=/var/lib/openvswitch
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_SCRIPT=/var/lib/openvswitch/flows-script
Jan 28 11:45:33 crc kubenswrapper[4804]: ++ FLOWS_RESTORE_DIR=/var/lib/openvswitch/saved-flows
Jan 28 11:45:33 crc kubenswrapper[4804]: ++
SAFE_TO_STOP_OVSDB_SERVER_SEMAPHORE=/var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + sleep 0.5 Jan 28 11:45:33 crc kubenswrapper[4804]: + '[' '!' -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server ']' Jan 28 11:45:33 crc kubenswrapper[4804]: + cleanup_ovsdb_server_semaphore Jan 28 11:45:33 crc kubenswrapper[4804]: + rm -f /var/lib/openvswitch/is_safe_to_stop_ovsdb_server Jan 28 11:45:33 crc kubenswrapper[4804]: + /usr/share/openvswitch/scripts/ovs-ctl stop --no-ovs-vswitchd Jan 28 11:45:33 crc kubenswrapper[4804]: > pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" containerID="cri-o://b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.201123 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" containerID="cri-o://b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" gracePeriod=28 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.234210 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.234825 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d88fd9b89-w66bx" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" 
containerID="cri-o://5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.235498 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-7d88fd9b89-w66bx" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" containerID="cri-o://789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.271960 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" containerID="cri-o://a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c" gracePeriod=604800 Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.285079 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 11:45:33 crc kubenswrapper[4804]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: if [ -n "nova_api" ]; then Jan 28 11:45:33 crc kubenswrapper[4804]: GRANT_DATABASE="nova_api" Jan 28 11:45:33 crc kubenswrapper[4804]: else Jan 28 11:45:33 crc kubenswrapper[4804]: GRANT_DATABASE="*" Jan 28 11:45:33 crc kubenswrapper[4804]: fi Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: # 
going for maximum compatibility here: Jan 28 11:45:33 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 11:45:33 crc kubenswrapper[4804]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 11:45:33 crc kubenswrapper[4804]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 11:45:33 crc kubenswrapper[4804]: # support updates Jan 28 11:45:33 crc kubenswrapper[4804]: Jan 28 11:45:33 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError" Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.286238 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" podUID="8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.369747 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.389429 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.401035 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-kcr62"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.414101 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.414355 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" containerID="cri-o://5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.414805 4804 kuberuntime_container.go:808] 
"Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" containerID="cri-o://f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.434086 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.434395 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8f675b957-rm9qp" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" containerID="cri-o://f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.435168 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-worker-8f675b957-rm9qp" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" containerID="cri-o://1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.460690 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" containerID="cri-o://27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" gracePeriod=28 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.460839 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.461087 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" 
containerID="cri-o://55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.461704 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api" containerID="cri-o://8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.476112 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.487083 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-w8q7w"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.494908 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.501850 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-n6kfg"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.509804 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.510260 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" containerID="cri-o://1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.510918 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" 
containerID="cri-o://bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.545258 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.548108 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.578199 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-vmdbt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.588417 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.588764 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" containerID="cri-o://61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.589025 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" containerID="cri-o://5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593463 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593514 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. 
No retries permitted until 2026-01-28 11:45:34.593500658 +0000 UTC m=+1410.388380642 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593660 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 28 11:45:33 crc kubenswrapper[4804]: E0128 11:45:33.593723 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:34.593703914 +0000 UTC m=+1410.388583898 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.595793 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.608640 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.608950 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.616724 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/nova-cell1-db-create-mw42v"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.623344 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gtg97_f7359aec-58b3-4254-8765-cdc131e5f912/openstack-network-exporter/0.log" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.623415 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.625153 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.635495 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.636152 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-cell1-galera-0" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera" containerID="cri-o://351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.643621 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.653771 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.672441 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-x5xnt"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.680980 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" containerID="cri-o://95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b" gracePeriod=604800 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.694126 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.702992 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.703228 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" containerID="cri-o://df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.719319 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6kjd\" (UniqueName: 
\"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796839 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796860 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796928 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.796995 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797019 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc 
kubenswrapper[4804]: I0128 11:45:33.797058 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797074 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") pod \"f7359aec-58b3-4254-8765-cdc131e5f912\" (UID: \"f7359aec-58b3-4254-8765-cdc131e5f912\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797128 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") pod \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\" (UID: \"eaba1c3c-49d4-498e-94b8-9c8cbe8660da\") " Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797765 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.797814 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir" (OuterVolumeSpecName: "ovs-rundir") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "ovs-rundir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.798706 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config" (OuterVolumeSpecName: "config") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.815295 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c" (OuterVolumeSpecName: "kube-api-access-5fk9c") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "kube-api-access-5fk9c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.820023 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd" (OuterVolumeSpecName: "kube-api-access-d6kjd") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "kube-api-access-d6kjd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.863566 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.872241 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900152 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5fk9c\" (UniqueName: \"kubernetes.io/projected/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-kube-api-access-5fk9c\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900183 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6kjd\" (UniqueName: \"kubernetes.io/projected/f7359aec-58b3-4254-8765-cdc131e5f912-kube-api-access-d6kjd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900192 4804 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900201 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900209 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7359aec-58b3-4254-8765-cdc131e5f912-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900219 4804 reconciler_common.go:293] "Volume detached for volume \"ovs-rundir\" 
(UniqueName: \"kubernetes.io/host-path/f7359aec-58b3-4254-8765-cdc131e5f912-ovs-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.900226 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.938342 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.938843 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-conductor-0" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" containerID="cri-o://87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" gracePeriod=30 Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.948775 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.955369 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-t5xcd"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.966962 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.968036 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:45:33 crc kubenswrapper[4804]: I0128 11:45:33.968290 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell0-conductor-0" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor" containerID="cri-o://fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" gracePeriod=30 Jan 28 11:45:33 crc 
kubenswrapper[4804]: I0128 11:45:33.975002 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-qbth2"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.016043 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.016513 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "eaba1c3c-49d4-498e-94b8-9c8cbe8660da" (UID: "eaba1c3c-49d4-498e-94b8-9c8cbe8660da"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.018184 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-metrics-gtg97_f7359aec-58b3-4254-8765-cdc131e5f912/openstack-network-exporter/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.018285 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-gtg97" event={"ID":"f7359aec-58b3-4254-8765-cdc131e5f912","Type":"ContainerDied","Data":"79be63495c588808e23a67b45c37537f9b6477c73ecd4b8dd566e47b4bed3b9d"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.018338 4804 scope.go:117] "RemoveContainer" containerID="565156fe636372aa88e628b080b158f58c0e89d805ea93ee8a1f9e78b61b800b" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.019414 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-metrics-gtg97" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042803 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerID="c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042843 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerID="005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042925 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerDied","Data":"c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.042951 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerDied","Data":"005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.047153 4804 generic.go:334] "Generic (PLEG): container finished" podID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerID="61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.047198 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerDied","Data":"61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.054679 4804 generic.go:334] "Generic (PLEG): container finished" podID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" exitCode=143 Jan 
28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.054743 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerDied","Data":"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063688 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c6c76352-2487-4098-bbee-579834052292/ovsdbserver-nb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063733 4804 generic.go:334] "Generic (PLEG): container finished" podID="c6c76352-2487-4098-bbee-579834052292" containerID="445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063784 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerDied","Data":"445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063926 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"c6c76352-2487-4098-bbee-579834052292","Type":"ContainerDied","Data":"d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.063955 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d26150d6a52bf056130226acaed9cb7292060f7026f2494687c5bc4ee4c04771" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.066475 4804 generic.go:334] "Generic (PLEG): container finished" podID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerID="45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c" exitCode=1 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.066527 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerDied","Data":"45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.067545 4804 scope.go:117] "RemoveContainer" containerID="45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.076434 4804 generic.go:334] "Generic (PLEG): container finished" podID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" exitCode=137 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.076656 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstackclient" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.086334 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerID="91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.087357 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerDied","Data":"91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.087407 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" event={"ID":"f7cab05f-efa6-4a74-920b-96f8f30f1736","Type":"ContainerDied","Data":"02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.087422 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02c33a94fee5850cffdcc1376e17adfb105d8ad41566dcf54330f1591b79ad5e" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.091573 4804 generic.go:334] 
"Generic (PLEG): container finished" podID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerID="54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.091630 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerDied","Data":"54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094761 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_50c4ac86-3241-4cd1-aa15-9a36b6be1e03/ovsdbserver-sb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094797 4804 generic.go:334] "Generic (PLEG): container finished" podID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerID="7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357" exitCode=2 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094810 4804 generic.go:334] "Generic (PLEG): container finished" podID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094845 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerDied","Data":"7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094866 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerDied","Data":"1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094875 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" 
event={"ID":"50c4ac86-3241-4cd1-aa15-9a36b6be1e03","Type":"ContainerDied","Data":"25c9a781686743f7412ee94f0767d676a774f06512184aef56e510538efe72e7"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.094943 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="25c9a781686743f7412ee94f0767d676a774f06512184aef56e510538efe72e7" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.097958 4804 generic.go:334] "Generic (PLEG): container finished" podID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.098010 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerDied","Data":"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.103813 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.103838 4804 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eaba1c3c-49d4-498e-94b8-9c8cbe8660da-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.104662 4804 generic.go:334] "Generic (PLEG): container finished" podID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.104735 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" 
event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.116085 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" event={"ID":"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d","Type":"ContainerStarted","Data":"314f7cccb05be770227b06402be77c91999b3a0c06e5100b791025a241c569ff"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.140281 4804 generic.go:334] "Generic (PLEG): container finished" podID="878daeff-34bf-4dab-8118-e42c318849bb" containerID="f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.140388 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerDied","Data":"f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.149080 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-cell1-novncproxy-0" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy" probeResult="failure" output="Get \"https://10.217.0.201:6080/vnc_lite.html\": dial tcp 10.217.0.201:6080: connect: connection refused" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.158771 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 11:45:34 crc kubenswrapper[4804]: container &Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc 
kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: if [ -n "nova_api" ]; then Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="nova_api" Jan 28 11:45:34 crc kubenswrapper[4804]: else Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="*" Jan 28 11:45:34 crc kubenswrapper[4804]: fi Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: # going for maximum compatibility here: Jan 28 11:45:34 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 11:45:34 crc kubenswrapper[4804]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 11:45:34 crc kubenswrapper[4804]: # 3. create user with CREATE but then do all password and TLS with ALTER to Jan 28 11:45:34 crc kubenswrapper[4804]: # support updates Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.159839 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"nova-api-db-secret\\\" not found\"" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" podUID="8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.208156 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "f7359aec-58b3-4254-8765-cdc131e5f912" (UID: "f7359aec-58b3-4254-8765-cdc131e5f912"). InnerVolumeSpecName "metrics-certs-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209614 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209634 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209641 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209647 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209654 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209659 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209665 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209671 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209677 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209683 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209688 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209694 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209700 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209736 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209760 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" 
event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209770 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209778 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209786 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209794 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209802 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209810 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 
11:45:34.209818 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209828 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209836 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209845 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.209854 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.211167 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-8fq2p" event={"ID":"12271c96-a234-46d8-bc32-80db78339116","Type":"ContainerStarted","Data":"892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e"} Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.216934 4804 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 28 11:45:34 crc kubenswrapper[4804]: container 
&Container{Name:mariadb-account-create-update,Image:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,Command:[/bin/sh -c #!/bin/bash Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_REMOTE_HOST="" source /var/lib/operator-scripts/mysql_root_auth.sh Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: export DatabasePassword=${DatabasePassword:?"Please specify a DatabasePassword variable."} Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: MYSQL_CMD="mysql -h -u root -P 3306" Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: if [ -n "barbican" ]; then Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="barbican" Jan 28 11:45:34 crc kubenswrapper[4804]: else Jan 28 11:45:34 crc kubenswrapper[4804]: GRANT_DATABASE="*" Jan 28 11:45:34 crc kubenswrapper[4804]: fi Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: # going for maximum compatibility here: Jan 28 11:45:34 crc kubenswrapper[4804]: # 1. MySQL 8 no longer allows implicit create user when GRANT is used Jan 28 11:45:34 crc kubenswrapper[4804]: # 2. MariaDB has "CREATE OR REPLACE", but MySQL does not Jan 28 11:45:34 crc kubenswrapper[4804]: # 3. 
create user with CREATE but then do all password and TLS with ALTER to Jan 28 11:45:34 crc kubenswrapper[4804]: # support updates Jan 28 11:45:34 crc kubenswrapper[4804]: Jan 28 11:45:34 crc kubenswrapper[4804]: $MYSQL_CMD < logger="UnhandledError" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.218943 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CreateContainerConfigError: \"secret \\\"barbican-db-secret\\\" not found\"" pod="openstack/barbican-8522-account-create-update-8fq2p" podUID="12271c96-a234-46d8-bc32-80db78339116" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.231720 4804 generic.go:334] "Generic (PLEG): container finished" podID="5198da96-d6b6-4b80-bb93-838dff10730e" containerID="49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.231868 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerDied","Data":"49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.236985 4804 generic.go:334] "Generic (PLEG): container finished" podID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerID="1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208" exitCode=143 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.237077 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerDied","Data":"1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.238441 4804 generic.go:334] "Generic (PLEG): container finished" podID="095bc753-88c4-456c-a3ae-aa0040a76338" 
containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f" exitCode=0 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.238609 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" containerID="cri-o://53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" gracePeriod=30 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.238834 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerDied","Data":"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"} Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.239195 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" containerID="cri-o://9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" gracePeriod=30 Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.307517 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7359aec-58b3-4254-8765-cdc131e5f912-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.384858 4804 scope.go:117] "RemoveContainer" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.418405 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_c6c76352-2487-4098-bbee-579834052292/ovsdbserver-nb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.418475 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.461142 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_50c4ac86-3241-4cd1-aa15-9a36b6be1e03/ovsdbserver-sb/0.log" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.490151 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.507572 4804 scope.go:117] "RemoveContainer" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.508753 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67\": container with ID starting with 0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67 not found: ID does not exist" containerID="0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.508806 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67"} err="failed to get container status \"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67\": rpc error: code = NotFound desc = could not find container \"0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67\": container with ID starting with 0f805615fb3e10ef958b3daefdf0d3d802fa701a9cf7dfdea194952874296d67 not found: ID does not exist" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.512991 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.513266 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.516733 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517298 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517355 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517436 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517516 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517807 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.517927 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.518581 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts" (OuterVolumeSpecName: "scripts") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.518582 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.518688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-nb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"c6c76352-2487-4098-bbee-579834052292\" (UID: \"c6c76352-2487-4098-bbee-579834052292\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.520215 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.520936 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config" (OuterVolumeSpecName: "config") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.558165 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "ovndbcluster-nb-etc-ovn") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.558622 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.564758 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.564813 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.572868 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd" (OuterVolumeSpecName: "kube-api-access-x9rrd") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "kube-api-access-x9rrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.585127 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.585219 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.614895 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628318 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628378 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628463 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628518 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") pod 
\"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628593 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628628 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndbcluster-sb-etc-ovn\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628662 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628686 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628709 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628861 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628900 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") pod \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\" (UID: \"50c4ac86-3241-4cd1-aa15-9a36b6be1e03\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.628952 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") pod \"f7cab05f-efa6-4a74-920b-96f8f30f1736\" (UID: \"f7cab05f-efa6-4a74-920b-96f8f30f1736\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629400 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6c76352-2487-4098-bbee-579834052292-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629432 4804 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629444 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/c6c76352-2487-4098-bbee-579834052292-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629458 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9rrd\" (UniqueName: \"kubernetes.io/projected/c6c76352-2487-4098-bbee-579834052292-kube-api-access-x9rrd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.629469 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.630920 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config" (OuterVolumeSpecName: "config") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.637413 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts" (OuterVolumeSpecName: "scripts") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.637482 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-metrics-gtg97"] Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.643846 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir" (OuterVolumeSpecName: "ovsdb-rundir") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "ovsdb-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.644300 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.644357 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:36.644339636 +0000 UTC m=+1412.439219620 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.645018 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.645180 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. 
No retries permitted until 2026-01-28 11:45:36.645161581 +0000 UTC m=+1412.440041565 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.674058 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq" (OuterVolumeSpecName: "kube-api-access-lp6hq") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "kube-api-access-lp6hq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.704580 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5" (OuterVolumeSpecName: "kube-api-access-4mpf5") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "kube-api-access-4mpf5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.704643 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "ovndbcluster-sb-etc-ovn") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "local-storage06-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730668 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730751 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730774 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730864 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.730931 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731011 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" 
(UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") pod \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\" (UID: \"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e\") " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731439 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdb-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731468 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lp6hq\" (UniqueName: \"kubernetes.io/projected/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-kube-api-access-lp6hq\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731479 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731500 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731509 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mpf5\" (UniqueName: \"kubernetes.io/projected/f7cab05f-efa6-4a74-920b-96f8f30f1736-kube-api-access-4mpf5\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.731517 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.734917 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/cinder-api-0" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api" 
probeResult="failure" output="Get \"https://10.217.0.168:8776/healthcheck\": read tcp 10.217.0.2:50254->10.217.0.168:8776: read: connection reset by peer" Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.735551 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: E0128 11:45:34.735631 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:38.735604931 +0000 UTC m=+1414.530484915 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.736761 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.769607 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.820147 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.835113 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.835137 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.835146 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.840192 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts" (OuterVolumeSpecName: "scripts") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.840207 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk" (OuterVolumeSpecName: "kube-api-access-j4htk") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "kube-api-access-j4htk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.913125 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.937276 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j4htk\" (UniqueName: \"kubernetes.io/projected/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-kube-api-access-j4htk\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.937306 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.937322 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.948415 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04ea6e04-5420-4f5b-911f-cdaede8220ab" 
path="/var/lib/kubelet/pods/04ea6e04-5420-4f5b-911f-cdaede8220ab/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.949517 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08795da4-549f-437a-9113-51d1003b5668" path="/var/lib/kubelet/pods/08795da4-549f-437a-9113-51d1003b5668/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.950480 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18b33b00-9642-45dc-8256-5db39ca166f1" path="/var/lib/kubelet/pods/18b33b00-9642-45dc-8256-5db39ca166f1/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.955576 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="359ecb47-f044-4273-8589-c0ceedb367b5" path="/var/lib/kubelet/pods/359ecb47-f044-4273-8589-c0ceedb367b5/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.956173 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="47a68429-2ef0-45da-8a73-62231d018738" path="/var/lib/kubelet/pods/47a68429-2ef0-45da-8a73-62231d018738/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.956713 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="518f34a2-84c4-4115-a28d-0251d0fa8064" path="/var/lib/kubelet/pods/518f34a2-84c4-4115-a28d-0251d0fa8064/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.957318 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5e2ade0c-9218-4f08-b78f-b6b6ede461f7" path="/var/lib/kubelet/pods/5e2ade0c-9218-4f08-b78f-b6b6ede461f7/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.958577 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b1029fc-e131-4d00-b538-6f0a17674c75" path="/var/lib/kubelet/pods/8b1029fc-e131-4d00-b538-6f0a17674c75/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.959194 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="903b6b99-b94d-428a-9c9c-7465ef27ad40" 
path="/var/lib/kubelet/pods/903b6b99-b94d-428a-9c9c-7465ef27ad40/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.959780 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc6a2a42-6519-46c6-bb24-074e5096001f" path="/var/lib/kubelet/pods/dc6a2a42-6519-46c6-bb24-074e5096001f/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.966068 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaba1c3c-49d4-498e-94b8-9c8cbe8660da" path="/var/lib/kubelet/pods/eaba1c3c-49d4-498e-94b8-9c8cbe8660da/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.966817 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f35650b1-56b4-49fb-9ecc-9aa90a1386db" path="/var/lib/kubelet/pods/f35650b1-56b4-49fb-9ecc-9aa90a1386db/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.967406 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" path="/var/lib/kubelet/pods/f7359aec-58b3-4254-8765-cdc131e5f912/volumes" Jan 28 11:45:34 crc kubenswrapper[4804]: I0128 11:45:34.970661 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.047340 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.050671 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.053016 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.074341 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config" (OuterVolumeSpecName: "config") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.092661 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.116867 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0" (OuterVolumeSpecName: "dns-swift-storage-0") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "dns-swift-storage-0". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.121024 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141137 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141169 4804 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-svc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141178 4804 reconciler_common.go:293] "Volume detached for volume \"dns-swift-storage-0\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-dns-swift-storage-0\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141188 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141197 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141206 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: E0128 11:45:35.141267 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141308 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: E0128 11:45:35.141362 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:39.141343889 +0000 UTC m=+1414.936223873 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.141708 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.157375 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-nb-tls-certs") pod "c6c76352-2487-4098-bbee-579834052292" (UID: "c6c76352-2487-4098-bbee-579834052292"). InnerVolumeSpecName "ovsdbserver-nb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.176753 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs" (OuterVolumeSpecName: "ovsdbserver-sb-tls-certs") pod "50c4ac86-3241-4cd1-aa15-9a36b6be1e03" (UID: "50c4ac86-3241-4cd1-aa15-9a36b6be1e03"). InnerVolumeSpecName "ovsdbserver-sb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.194530 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "f7cab05f-efa6-4a74-920b-96f8f30f1736" (UID: "f7cab05f-efa6-4a74-920b-96f8f30f1736"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243110 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/c6c76352-2487-4098-bbee-579834052292-ovsdbserver-nb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243141 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/50c4ac86-3241-4cd1-aa15-9a36b6be1e03-ovsdbserver-sb-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243153 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.243163 4804 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/f7cab05f-efa6-4a74-920b-96f8f30f1736-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.250810 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data" (OuterVolumeSpecName: "config-data") pod "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" (UID: "5d820036-aa62-4f3a-b0b8-4dad1e7ff46e"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.252717 4804 generic.go:334] "Generic (PLEG): container finished" podID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" exitCode=143 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.255429 4804 generic.go:334] "Generic (PLEG): container finished" podID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888" exitCode=1 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.256283 4804 scope.go:117] "RemoveContainer" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888" Jan 28 11:45:35 crc kubenswrapper[4804]: E0128 11:45:35.256819 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-jqk9s_openstack(be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8)\"" pod="openstack/root-account-create-update-jqk9s" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.258173 4804 generic.go:334] "Generic (PLEG): container finished" podID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerID="351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9" exitCode=0 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.261056 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.264700 4804 generic.go:334] "Generic (PLEG): container finished" podID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerID="7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774" exitCode=0 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.266020 4804 generic.go:334] "Generic (PLEG): container finished" podID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerID="67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a" exitCode=0 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.269689 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5c7b6c5df9-j9ld2" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.269917 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.277250 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336307 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerDied","Data":"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336690 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336723 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerDied","Data":"d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336740 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerDied","Data":"351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336757 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"5d820036-aa62-4f3a-b0b8-4dad1e7ff46e","Type":"ContainerDied","Data":"4cc14b4a4b262ffd7dca6ce3a4c78be1958d2621d179512804ce0187bc8fd56e"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336773 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerDied","Data":"7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336788 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerDied","Data":"67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336802 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"b390f543-98da-46ea-b3b9-f68c09d94c03","Type":"ContainerDied","Data":"3d6b0e8a60f6d64a7898369a58401894b066ffaf5a9e53838f90370bc8ff4841"} Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336814 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3d6b0e8a60f6d64a7898369a58401894b066ffaf5a9e53838f90370bc8ff4841" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.336834 4804 scope.go:117] "RemoveContainer" containerID="45f239fc147b42454bdb77cdc16602cd03b54af32ff3e4a9b380a4fde2275f5c" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.343409 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" containerID="cri-o://6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60" gracePeriod=30 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.343853 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" containerID="cri-o://b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa" gracePeriod=30 Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.350159 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.464929 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.512020 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.516714 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.552121 4804 scope.go:117] "RemoveContainer" containerID="c1345bf2b60adbef9b806636ee3887a5869fd85c14cb9679c394104f26a95a2c" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.555561 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556211 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556268 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556312 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556367 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556387 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556424 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556453 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556474 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556538 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: 
I0128 11:45:35.556581 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556601 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556629 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556651 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556666 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556701 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" 
(UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556731 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") pod \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\" (UID: \"04cc886c-66ef-4b91-87cf-1f9fe5de8081\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556751 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556766 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556806 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556847 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556875 4804 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") pod \"b390f543-98da-46ea-b3b9-f68c09d94c03\" (UID: \"b390f543-98da-46ea-b3b9-f68c09d94c03\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.556913 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") pod \"24549b02-2977-49ee-8f25-a6ed25e523d1\" (UID: \"24549b02-2977-49ee-8f25-a6ed25e523d1\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.560003 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.561299 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.562672 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.572090 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7" (OuterVolumeSpecName: "kube-api-access-r8gt7") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "kube-api-access-r8gt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.572689 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.573446 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs" (OuterVolumeSpecName: "logs") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.574065 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv" (OuterVolumeSpecName: "kube-api-access-ps6sv") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "kube-api-access-ps6sv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.576170 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.586558 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.606123 4804 scope.go:117] "RemoveContainer" containerID="005c93d53e10abe220c87f4440097a40fcd2ee8a29f58966418aa864a302e6f7" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.611724 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj" (OuterVolumeSpecName: "kube-api-access-k8jjj") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "kube-api-access-k8jjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.611988 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts" (OuterVolumeSpecName: "scripts") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.623012 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5c7b6c5df9-j9ld2"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.649493 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.652220 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "mysql-db") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660196 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r8gt7\" (UniqueName: \"kubernetes.io/projected/24549b02-2977-49ee-8f25-a6ed25e523d1-kube-api-access-r8gt7\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660226 4804 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660235 4804 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/04cc886c-66ef-4b91-87cf-1f9fe5de8081-etc-machine-id\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660244 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660253 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ps6sv\" (UniqueName: \"kubernetes.io/projected/b390f543-98da-46ea-b3b9-f68c09d94c03-kube-api-access-ps6sv\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660284 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660293 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 
11:45:35.660302 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/04cc886c-66ef-4b91-87cf-1f9fe5de8081-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660310 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660317 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660326 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8jjj\" (UniqueName: \"kubernetes.io/projected/04cc886c-66ef-4b91-87cf-1f9fe5de8081-kube-api-access-k8jjj\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660335 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.660345 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/24549b02-2977-49ee-8f25-a6ed25e523d1-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.751338 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.751692 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.754096 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.763083 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.763113 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.785850 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.802514 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.810695 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.819029 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: 
"kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.819071 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-sb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.846520 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.854232 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs" (OuterVolumeSpecName: "nova-novncproxy-tls-certs") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "nova-novncproxy-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865110 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865452 4804 reconciler_common.go:293] "Volume detached for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-nova-novncproxy-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865476 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.865492 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.868551 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data" (OuterVolumeSpecName: "config-data") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.899040 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data" (OuterVolumeSpecName: "config-data") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.921462 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs" (OuterVolumeSpecName: "vencrypt-tls-certs") pod "b390f543-98da-46ea-b3b9-f68c09d94c03" (UID: "b390f543-98da-46ea-b3b9-f68c09d94c03"). InnerVolumeSpecName "vencrypt-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.922688 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.934060 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api" probeResult="failure" output="Get \"https://10.217.0.157:8778/\": read tcp 10.217.0.2:45092->10.217.0.157:8778: read: connection reset by peer" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.934451 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/placement-659f7cffd6-wm9cj" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log" probeResult="failure" output="Get \"https://10.217.0.157:8778/\": read tcp 10.217.0.2:45094->10.217.0.157:8778: read: connection reset by peer" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.935254 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovsdbserver-nb-0"] Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.955535 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "04cc886c-66ef-4b91-87cf-1f9fe5de8081" (UID: "04cc886c-66ef-4b91-87cf-1f9fe5de8081"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.960099 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966035 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") pod \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966216 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") pod \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\" (UID: \"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d\") " Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966667 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966685 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966695 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/04cc886c-66ef-4b91-87cf-1f9fe5de8081-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.966704 4804 reconciler_common.go:293] "Volume detached for volume \"vencrypt-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b390f543-98da-46ea-b3b9-f68c09d94c03-vencrypt-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.967976 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" (UID: "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.971686 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "24549b02-2977-49ee-8f25-a6ed25e523d1" (UID: "24549b02-2977-49ee-8f25-a6ed25e523d1"). InnerVolumeSpecName "galera-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:35 crc kubenswrapper[4804]: I0128 11:45:35.973713 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz" (OuterVolumeSpecName: "kube-api-access-2kqkz") pod "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" (UID: "8933c7a4-1e24-4de2-b302-1be9bc3c1e2d"). InnerVolumeSpecName "kube-api-access-2kqkz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.067688 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") pod \"12271c96-a234-46d8-bc32-80db78339116\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.067748 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") pod \"12271c96-a234-46d8-bc32-80db78339116\" (UID: \"12271c96-a234-46d8-bc32-80db78339116\") " Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.068212 4804 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/24549b02-2977-49ee-8f25-a6ed25e523d1-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.068235 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kqkz\" (UniqueName: \"kubernetes.io/projected/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-kube-api-access-2kqkz\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.068247 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.069075 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "12271c96-a234-46d8-bc32-80db78339116" (UID: "12271c96-a234-46d8-bc32-80db78339116"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.076150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc" (OuterVolumeSpecName: "kube-api-access-69bqc") pod "12271c96-a234-46d8-bc32-80db78339116" (UID: "12271c96-a234-46d8-bc32-80db78339116"). InnerVolumeSpecName "kube-api-access-69bqc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.171314 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/12271c96-a234-46d8-bc32-80db78339116-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.171359 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-69bqc\" (UniqueName: \"kubernetes.io/projected/12271c96-a234-46d8-bc32-80db78339116-kube-api-access-69bqc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.329601 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"04cc886c-66ef-4b91-87cf-1f9fe5de8081","Type":"ContainerDied","Data":"6b06f838e59a73b485a69b93f766b0fb460afb06549c4aa004f7bac68fc724cc"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.329662 4804 scope.go:117] "RemoveContainer" containerID="7b625bd5e08a3fff2579118cb1bfb0f02e6d6eda2e30f536589d6d7b53d87774" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.329833 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-api-0" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.351401 4804 generic.go:334] "Generic (PLEG): container finished" podID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" exitCode=0 Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.351508 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerDied","Data":"fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.377179 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-8522-account-create-update-8fq2p" event={"ID":"12271c96-a234-46d8-bc32-80db78339116","Type":"ContainerDied","Data":"892941696eeaa5b4b2629f40cb73dfe018b72b2636b79c4b93c9f6f01ce6184e"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.377567 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-8522-account-create-update-8fq2p" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.414138 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" event={"ID":"8933c7a4-1e24-4de2-b302-1be9bc3c1e2d","Type":"ContainerDied","Data":"314f7cccb05be770227b06402be77c91999b3a0c06e5100b791025a241c569ff"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.414232 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0c6f-account-create-update-hhm9c" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.445292 4804 generic.go:334] "Generic (PLEG): container finished" podID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerID="2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa" exitCode=0 Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.445621 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerDied","Data":"2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.460595 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.471115 4804 scope.go:117] "RemoveContainer" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888" Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.471785 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mariadb-account-create-update\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mariadb-account-create-update pod=root-account-create-update-jqk9s_openstack(be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8)\"" pod="openstack/root-account-create-update-jqk9s" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.473147 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.523701 4804 scope.go:117] "RemoveContainer" containerID="b7a5a299ea638aff1b67f737be31752dbc58e62ca2663d443117c669ac5e859a" Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.527968 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.531406 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.535604 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.535656 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441 is running failed: container process not found" probeType="Readiness" pod="openstack/nova-cell0-conductor-0" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.544361 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.559153 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" 
event={"ID":"24549b02-2977-49ee-8f25-a6ed25e523d1","Type":"ContainerDied","Data":"b7ae3c5cd3fc37a1e5fff03dc9d1c7b30b19148db61f795f2d045d947ed549b4"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.559264 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.580826 4804 generic.go:334] "Generic (PLEG): container finished" podID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerID="b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa" exitCode=0 Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581128 4804 generic.go:334] "Generic (PLEG): container finished" podID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerID="6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60" exitCode=0 Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581346 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerDied","Data":"b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581406 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerDied","Data":"6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60"} Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.581588 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.638414 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-8522-account-create-update-8fq2p"] Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.659615 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.675278 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.675418 4804 scope.go:117] "RemoveContainer" containerID="351711c020c75334855ec428e2d1987910c3ce0fc9fe965d8ca2c554f8fb0ae9" Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.683341 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" cmd=["/usr/local/bin/container-scripts/status_check.sh"] Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.683764 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-northd-0" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" Jan 28 11:45:36 crc 
kubenswrapper[4804]: E0128 11:45:36.684793 4804 secret.go:188] Couldn't get secret openstack/glance-scripts: secret "glance-scripts" not found Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.684901 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:40.684837336 +0000 UTC m=+1416.479717320 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "scripts" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-scripts" not found Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.684947 4804 secret.go:188] Couldn't get secret openstack/glance-default-internal-config-data: secret "glance-default-internal-config-data" not found Jan 28 11:45:36 crc kubenswrapper[4804]: E0128 11:45:36.685102 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data podName:4f5cdaa9-8b1d-44b2-bfe6-d986f680327f nodeName:}" failed. No retries permitted until 2026-01-28 11:45:40.685090073 +0000 UTC m=+1416.479970087 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data") pod "glance-default-internal-api-0" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f") : secret "glance-default-internal-config-data" not found Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.698080 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.698672 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: connect: connection refused" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.698750 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" probeResult="failure" output="Get \"https://10.217.0.171:8080/healthcheck\": dial tcp 10.217.0.171:8080: connect: connection refused" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.703389 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0c6f-account-create-update-hhm9c"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.709311 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:48414->10.217.0.207:8775: read: connection reset by peer" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.709454 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/nova-metadata-0" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" 
probeResult="failure" output="Get \"https://10.217.0.207:8775/\": read tcp 10.217.0.2:48426->10.217.0.207:8775: read: connection reset by peer" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.712162 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.745161 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.757351 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:52218->10.217.0.162:9311: read: connection reset by peer" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.757488 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api" probeResult="failure" output="Get \"https://10.217.0.162:9311/healthcheck\": read tcp 10.217.0.2:52234->10.217.0.162:9311: read: connection reset by peer" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.771143 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.777158 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-cell1-galera-0"] Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.939054 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" path="/var/lib/kubelet/pods/04cc886c-66ef-4b91-87cf-1f9fe5de8081/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.939779 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="12271c96-a234-46d8-bc32-80db78339116" path="/var/lib/kubelet/pods/12271c96-a234-46d8-bc32-80db78339116/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.940439 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" path="/var/lib/kubelet/pods/24549b02-2977-49ee-8f25-a6ed25e523d1/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.955181 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" path="/var/lib/kubelet/pods/50c4ac86-3241-4cd1-aa15-9a36b6be1e03/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.956174 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" path="/var/lib/kubelet/pods/5d820036-aa62-4f3a-b0b8-4dad1e7ff46e/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.957121 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8933c7a4-1e24-4de2-b302-1be9bc3c1e2d" path="/var/lib/kubelet/pods/8933c7a4-1e24-4de2-b302-1be9bc3c1e2d/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.958231 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" path="/var/lib/kubelet/pods/b390f543-98da-46ea-b3b9-f68c09d94c03/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.959029 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6c76352-2487-4098-bbee-579834052292" path="/var/lib/kubelet/pods/c6c76352-2487-4098-bbee-579834052292/volumes" Jan 28 11:45:36 crc kubenswrapper[4804]: I0128 11:45:36.959666 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" path="/var/lib/kubelet/pods/f7cab05f-efa6-4a74-920b-96f8f30f1736/volumes" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.093208 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.094747 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.106416 4804 scope.go:117] "RemoveContainer" containerID="71511ac2cacaf27ae221597c51e8a13319dc222d2cd450901bd6db686f0e4b92" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.195304 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196628 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196682 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196721 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196754 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196776 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.196794 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") pod \"8cb48af9-edd2-404a-9d56-afedbfa79f07\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.197046 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") pod \"280cd1a0-6761-425c-8de1-bec2307ba0c0\" (UID: \"280cd1a0-6761-425c-8de1-bec2307ba0c0\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.197368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") pod \"8cb48af9-edd2-404a-9d56-afedbfa79f07\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.197467 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") pod \"8cb48af9-edd2-404a-9d56-afedbfa79f07\" (UID: \"8cb48af9-edd2-404a-9d56-afedbfa79f07\") " Jan 28 11:45:37 crc 
kubenswrapper[4804]: I0128 11:45:37.201519 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs" (OuterVolumeSpecName: "logs") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.236136 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll" (OuterVolumeSpecName: "kube-api-access-ffjll") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "kube-api-access-ffjll". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.237602 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts" (OuterVolumeSpecName: "scripts") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.249480 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8" (OuterVolumeSpecName: "kube-api-access-t74n8") pod "8cb48af9-edd2-404a-9d56-afedbfa79f07" (UID: "8cb48af9-edd2-404a-9d56-afedbfa79f07"). InnerVolumeSpecName "kube-api-access-t74n8". 
PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.271093 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8cb48af9-edd2-404a-9d56-afedbfa79f07" (UID: "8cb48af9-edd2-404a-9d56-afedbfa79f07"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.297114 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data" (OuterVolumeSpecName: "config-data") pod "8cb48af9-edd2-404a-9d56-afedbfa79f07" (UID: "8cb48af9-edd2-404a-9d56-afedbfa79f07"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301244 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/280cd1a0-6761-425c-8de1-bec2307ba0c0-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301370 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-scripts\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301426 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t74n8\" (UniqueName: \"kubernetes.io/projected/8cb48af9-edd2-404a-9d56-afedbfa79f07-kube-api-access-t74n8\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301481 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ffjll\" (UniqueName: \"kubernetes.io/projected/280cd1a0-6761-425c-8de1-bec2307ba0c0-kube-api-access-ffjll\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301533 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.301606 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8cb48af9-edd2-404a-9d56-afedbfa79f07-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.318495 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.354017 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.386212 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data" (OuterVolumeSpecName: "config-data") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.416526 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.416556 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.425229 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.468778 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475396 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475747 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" containerID="cri-o://e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475859 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" containerID="cri-o://4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475872 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" containerID="cri-o://1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.475893 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" containerID="cri-o://e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.493903 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517222 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517294 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517359 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517384 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.517509 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.518819 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.518916 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.518973 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") pod \"fcc0e969-75e0-4441-a805-7845261f1ad5\" (UID: \"fcc0e969-75e0-4441-a805-7845261f1ad5\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.519619 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.524063 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.526991 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "280cd1a0-6761-425c-8de1-bec2307ba0c0" (UID: "280cd1a0-6761-425c-8de1-bec2307ba0c0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.527503 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.528322 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc" (OuterVolumeSpecName: "kube-api-access-z6vwc") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "kube-api-access-z6vwc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.528972 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.579246 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.579471 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" containerID="cri-o://ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620243 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620409 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620441 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620466 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620487 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620528 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620547 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620635 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620661 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620681 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") pod \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\" (UID: \"bb3c1e4d-637e-4de6-aa37-7daff5298b30\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.620700 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") pod \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\" (UID: \"b0bfaf6b-2c74-4812-965a-4db80f0c4527\") "
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621226 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z6vwc\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-kube-api-access-z6vwc\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621244 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-run-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621254 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/fcc0e969-75e0-4441-a805-7845261f1ad5-log-httpd\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621265 4804 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/fcc0e969-75e0-4441-a805-7845261f1ad5-etc-swift\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.621276 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/280cd1a0-6761-425c-8de1-bec2307ba0c0-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.626257 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs" (OuterVolumeSpecName: "logs") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.640395 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs" (OuterVolumeSpecName: "logs") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.712633 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4" (OuterVolumeSpecName: "kube-api-access-67wq4") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "kube-api-access-67wq4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.713311 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.774682 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-proxy-59fb5cbd47-wwqmq" event={"ID":"fcc0e969-75e0-4441-a805-7845261f1ad5","Type":"ContainerDied","Data":"6d2eca1ee21c2e58f6c5ebc2fd659f0e3b36f17ff8d88938be99b51b5573272e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.774741 4804 scope.go:117] "RemoveContainer" containerID="b44257342c1561f0cc777c6fe14a814950eb25277bf5e95e9adf49e4a763d6fa"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.775071 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-proxy-59fb5cbd47-wwqmq"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787322 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67wq4\" (UniqueName: \"kubernetes.io/projected/b0bfaf6b-2c74-4812-965a-4db80f0c4527-kube-api-access-67wq4\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787360 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/bb3c1e4d-637e-4de6-aa37-7daff5298b30-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787381 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data-custom\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.787391 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b0bfaf6b-2c74-4812-965a-4db80f0c4527-logs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.798357 4804 generic.go:334] "Generic (PLEG): container finished" podID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerID="5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.798468 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerDied","Data":"5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.814971 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-659f7cffd6-wm9cj" event={"ID":"280cd1a0-6761-425c-8de1-bec2307ba0c0","Type":"ContainerDied","Data":"326e140f9daa666bf3c0b563922935205ab7fc5dba38cc45fd96d0a13dcbd798"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.815115 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-659f7cffd6-wm9cj"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.821512 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"8cb48af9-edd2-404a-9d56-afedbfa79f07","Type":"ContainerDied","Data":"c41ec5eb61e29312ebbde6dd9b201b0e68fdaaa8fb1724740ba107ac19157740"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.821663 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.842476 4804 generic.go:334] "Generic (PLEG): container finished" podID="5198da96-d6b6-4b80-bb93-838dff10730e" containerID="ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.842714 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerDied","Data":"ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.847144 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.854651 4804 generic.go:334] "Generic (PLEG): container finished" podID="878daeff-34bf-4dab-8118-e42c318849bb" containerID="1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.854734 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerDied","Data":"1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.861600 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99" (OuterVolumeSpecName: "kube-api-access-qkk99") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "kube-api-access-qkk99". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.867047 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868150 4804 generic.go:334] "Generic (PLEG): container finished" podID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868232 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerDied","Data":"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868261 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-7bd5b5bf44-5z4wx" event={"ID":"bb3c1e4d-637e-4de6-aa37-7daff5298b30","Type":"ContainerDied","Data":"545d9d7c89cb4fb5f1b3a7bdef9c710109a6c3aca89e779fe23e0a1c510a7627"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.868347 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-7bd5b5bf44-5z4wx"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.873292 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f8f4-account-create-update-mg2gd"]
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.879862 4804 generic.go:334] "Generic (PLEG): container finished" podID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" exitCode=0
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880083 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerDied","Data":"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880155 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"b0bfaf6b-2c74-4812-965a-4db80f0c4527","Type":"ContainerDied","Data":"dde0b061f1847f788c0ad04e0fb5557d71997e0c0bf63a89d091b0cead6b787e"}
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880470 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/memcached-0" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" containerID="cri-o://386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759" gracePeriod=30
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.880934 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.889267 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"]
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.889990 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890014 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890030 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890036 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890064 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890071 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890085 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890091 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890105 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890111 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890114 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkk99\" (UniqueName: \"kubernetes.io/projected/bb3c1e4d-637e-4de6-aa37-7daff5298b30-kube-api-access-qkk99\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890127 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890135 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890144 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="init"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890151 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="init"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890182 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890188 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890205 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890212 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890229 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890235 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890256 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890262 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890278 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890284 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890302 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890308 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890321 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890328 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890353 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890363 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890382 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890388 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890407 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890413 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890425 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890431 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890443 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="mysql-bootstrap"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890449 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="mysql-bootstrap"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890456 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890464 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890477 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890483 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890493 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890512 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: E0128 11:45:37.890522 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890528 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890818 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890833 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890840 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-log"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890855 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-metadata"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890862 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7359aec-58b3-4254-8765-cdc131e5f912" containerName="openstack-network-exporter"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890874 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" containerName="nova-cell0-conductor-conductor"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890903 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7cab05f-efa6-4a74-920b-96f8f30f1736" containerName="dnsmasq-dns"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890914 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890928 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c76352-2487-4098-bbee-579834052292" containerName="ovsdbserver-nb"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890945 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="cinder-scheduler"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890956 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" containerName="placement-api"
Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890963 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b390f543-98da-46ea-b3b9-f68c09d94c03" containerName="nova-cell1-novncproxy-novncproxy" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.890976 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d820036-aa62-4f3a-b0b8-4dad1e7ff46e" containerName="probe" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891020 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-httpd" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891037 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="50c4ac86-3241-4cd1-aa15-9a36b6be1e03" containerName="ovsdbserver-sb" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891047 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" containerName="barbican-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891058 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" containerName="nova-metadata-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891073 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="24549b02-2977-49ee-8f25-a6ed25e523d1" containerName="galera" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891083 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="04cc886c-66ef-4b91-87cf-1f9fe5de8081" containerName="cinder-api-log" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891096 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" containerName="proxy-server" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.891110 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6c76352-2487-4098-bbee-579834052292" 
containerName="openstack-network-exporter" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.898036 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.901132 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.912940 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.930456 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.935216 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.945484 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-5r69w"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.951566 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-qmm7h"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.957922 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.958193 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/keystone-6f885d959c-vhjh4" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" containerID="cri-o://31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3" gracePeriod=30 Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.964730 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.970060 4804 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.991258 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.991399 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:37 crc kubenswrapper[4804]: I0128 11:45:37.993066 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-5t7jn"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.001373 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.005142 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.012309 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.024814 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.032949 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data" (OuterVolumeSpecName: "config-data") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.077369 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.077653 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.077696 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.087451 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data" (OuterVolumeSpecName: "config-data") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.087447 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.087493 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.089676 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial 
failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6af777f5_5dfc_4f4d_b7c5_dd0de3f80def.slice/crio-ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod90f5a2ef_6224_4af8_8bba_32c689a960f1.slice/crio-conmon-4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4f5cdaa9_8b1d_44b2_bfe6_d986f680327f.slice/crio-9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63.scope\": RecentStats: unable to find data in memory cache]" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.089806 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.089855 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.090209 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"] Jan 
28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.090240 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.099658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.105754 4804 projected.go:194] Error preparing data for projected volume kube-api-access-fv7qb for pod openstack/keystone-f8f4-account-create-update-xlhb4: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.105843 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:38.605820022 +0000 UTC m=+1414.400700006 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fv7qb" (UniqueName: "kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106358 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106563 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106583 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106596 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106609 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.106621 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.106696 4804 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.106750 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:38.606732171 +0000 UTC m=+1414.401612165 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.121227 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data" (OuterVolumeSpecName: "config-data") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.140991 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "fcc0e969-75e0-4441-a805-7845261f1ad5" (UID: "fcc0e969-75e0-4441-a805-7845261f1ad5"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.142047 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.162965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "b0bfaf6b-2c74-4812-965a-4db80f0c4527" (UID: "b0bfaf6b-2c74-4812-965a-4db80f0c4527"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.171018 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.171143 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "bb3c1e4d-637e-4de6-aa37-7daff5298b30" (UID: "bb3c1e4d-637e-4de6-aa37-7daff5298b30"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217093 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217538 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/fcc0e969-75e0-4441-a805-7845261f1ad5-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217553 4804 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/b0bfaf6b-2c74-4812-965a-4db80f0c4527-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217567 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217682 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.217701 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bb3c1e4d-637e-4de6-aa37-7daff5298b30-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.270592 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/openstack-galera-0" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" 
containerID="cri-o://5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" gracePeriod=30 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.277294 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.307403 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-fv7qb operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openstack/keystone-f8f4-account-create-update-xlhb4" podUID="e8eac10f-27a6-4229-9281-ead753bf852d" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.322320 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.329093 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.330752 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.331974 4804 scope.go:117] "RemoveContainer" containerID="6e4df9959650dab13d28cc4f6579b5bbae4ec71560a9da89f85150e891a84a60" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.349756 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-0"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.360078 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.372452 4804 scope.go:117] "RemoveContainer" containerID="2bc2f4bef5b6e11721d8eabaa519e6625f7ff953fd015c6be0cebef1e6ec65fa" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.391820 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Jan 28 
11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.417289 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422872 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422930 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422951 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.422997 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423090 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423136 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423157 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423275 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423308 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423326 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") pod \"878daeff-34bf-4dab-8118-e42c318849bb\" (UID: \"878daeff-34bf-4dab-8118-e42c318849bb\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423345 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: 
\"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423388 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423431 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423477 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423501 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") pod \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\" (UID: \"ae0fb199-797a-40c6-8c71-3b5a976b6c61\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423519 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423533 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.423557 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") pod \"5198da96-d6b6-4b80-bb93-838dff10730e\" (UID: \"5198da96-d6b6-4b80-bb93-838dff10730e\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.426625 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-659f7cffd6-wm9cj"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.430237 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l" (OuterVolumeSpecName: "kube-api-access-4qm8l") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "kube-api-access-4qm8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.434503 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.436998 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.440102 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd" (OuterVolumeSpecName: "kube-api-access-4jlrd") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "kube-api-access-4jlrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.455260 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj" (OuterVolumeSpecName: "kube-api-access-qshqj") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "kube-api-access-qshqj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.455870 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs" (OuterVolumeSpecName: "logs") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.455963 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs" (OuterVolumeSpecName: "logs") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.456594 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.457187 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts" (OuterVolumeSpecName: "scripts") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.458110 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs" (OuterVolumeSpecName: "logs") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.460280 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.474563 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.492436 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.503243 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.508956 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.519379 4804 scope.go:117] "RemoveContainer" containerID="54143a992b19966c4a0488e9860a42a6b4166527e948b9a61cc651fb19353896" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.524459 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") pod \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.524738 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") pod \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\" (UID: \"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525120 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qshqj\" (UniqueName: \"kubernetes.io/projected/5198da96-d6b6-4b80-bb93-838dff10730e-kube-api-access-qshqj\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525132 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4qm8l\" (UniqueName: \"kubernetes.io/projected/ae0fb199-797a-40c6-8c71-3b5a976b6c61-kube-api-access-4qm8l\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525140 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 
11:45:38.525160 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525169 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525178 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525186 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4jlrd\" (UniqueName: \"kubernetes.io/projected/878daeff-34bf-4dab-8118-e42c318849bb-kube-api-access-4jlrd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525194 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/878daeff-34bf-4dab-8118-e42c318849bb-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525201 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525210 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525217 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5198da96-d6b6-4b80-bb93-838dff10730e-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525225 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ae0fb199-797a-40c6-8c71-3b5a976b6c61-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525233 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.525283 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" (UID: "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.529058 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-proxy-59fb5cbd47-wwqmq"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.545513 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.550324 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-7bd5b5bf44-5z4wx"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.558191 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf" (OuterVolumeSpecName: "kube-api-access-7pcnf") pod "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" (UID: "be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8"). InnerVolumeSpecName "kube-api-access-7pcnf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.588338 4804 scope.go:117] "RemoveContainer" containerID="fb225d372d964c0886efe717a1558213e14fe762f8d84d3188ad176da11be441" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.605087 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data" (OuterVolumeSpecName: "config-data") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.612316 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634390 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634617 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634768 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634784 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634796 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.634806 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7pcnf\" (UniqueName: \"kubernetes.io/projected/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8-kube-api-access-7pcnf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 
crc kubenswrapper[4804]: E0128 11:45:38.634933 4804 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.634990 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:39.634973078 +0000 UTC m=+1415.429853062 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : configmap "openstack-scripts" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.640371 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "5198da96-d6b6-4b80-bb93-838dff10730e" (UID: "5198da96-d6b6-4b80-bb93-838dff10730e"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.643120 4804 projected.go:194] Error preparing data for projected volume kube-api-access-fv7qb for pod openstack/keystone-f8f4-account-create-update-xlhb4: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.643187 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:39.643166033 +0000 UTC m=+1415.438046017 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-fv7qb" (UniqueName: "kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.643747 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data" (OuterVolumeSpecName: "config-data") pod "878daeff-34bf-4dab-8118-e42c318849bb" (UID: "878daeff-34bf-4dab-8118-e42c318849bb"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.649796 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.656109 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.666023 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data" (OuterVolumeSpecName: "config-data") pod "ae0fb199-797a-40c6-8c71-3b5a976b6c61" (UID: "ae0fb199-797a-40c6-8c71-3b5a976b6c61"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.688528 4804 scope.go:117] "RemoveContainer" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.717490 4804 scope.go:117] "RemoveContainer" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.735977 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736006 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/5198da96-d6b6-4b80-bb93-838dff10730e-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736019 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/878daeff-34bf-4dab-8118-e42c318849bb-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736034 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae0fb199-797a-40c6-8c71-3b5a976b6c61-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.736043 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.735975 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-cell1-config-data: configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.736140 4804 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data podName:76d127f1-97d9-4552-9bdb-b3482a45951d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:46.736124141 +0000 UTC m=+1422.531004125 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data") pod "rabbitmq-cell1-server-0" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d") : configmap "rabbitmq-cell1-config-data" not found Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.753539 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" probeResult="failure" output="command timed out" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.767681 4804 scope.go:117] "RemoveContainer" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.774556 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e\": container with ID starting with 8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e not found: ID does not exist" containerID="8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.774605 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e"} err="failed to get container status \"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e\": rpc error: code = NotFound desc = could not find container \"8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e\": container with ID 
starting with 8d0c8f53675abf685c18dad2530a650a414a1f6eeae32664e3b31f92ba60cc8e not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.774635 4804 scope.go:117] "RemoveContainer" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.775262 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389\": container with ID starting with 55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389 not found: ID does not exist" containerID="55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.775318 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389"} err="failed to get container status \"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389\": rpc error: code = NotFound desc = could not find container \"55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389\": container with ID starting with 55abfcad22db2070d2bc24cf3ad45d4265ce61c90a41fde36fb0607c3dc76389 not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.775337 4804 scope.go:117] "RemoveContainer" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.778280 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.817464 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" probeResult="failure" output=< Jan 28 11:45:38 crc kubenswrapper[4804]: ERROR - Failed to get connection status from ovn-controller, ovn-appctl exit status: 0 Jan 28 11:45:38 crc kubenswrapper[4804]: > Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.819305 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.821327 4804 scope.go:117] "RemoveContainer" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.821627 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.823375 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" cmd=["/usr/bin/pgrep","-r","DRST","nova-conductor"] Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.823412 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = 
command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-cell1-conductor-0" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.829451 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/memcached-0" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" probeResult="failure" output="dial tcp 10.217.0.105:11211: connect: connection refused" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837193 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837306 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837342 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837402 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc 
kubenswrapper[4804]: I0128 11:45:38.837448 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837528 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837555 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.837576 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") pod \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\" (UID: \"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f\") " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.848242 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.855251 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs" (OuterVolumeSpecName: "logs") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.855415 4804 scope.go:117] "RemoveContainer" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.859284 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca\": container with ID starting with f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca not found: ID does not exist" containerID="f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.859320 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca"} err="failed to get container status \"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca\": rpc error: code = NotFound desc = could not find container \"f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca\": container with ID starting with f14d899f6f5153708e4633baabd5104219bfb6d36d71493af9acba4ce67050ca not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.859346 4804 scope.go:117] "RemoveContainer" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" Jan 28 11:45:38 crc kubenswrapper[4804]: E0128 11:45:38.860282 4804 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06\": container with ID starting with 5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06 not found: ID does not exist" containerID="5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.860308 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06"} err="failed to get container status \"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06\": rpc error: code = NotFound desc = could not find container \"5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06\": container with ID starting with 5966e644ea86e36317718849ddc7eca9927a3b83a24d7305f83a5163eb458b06 not found: ID does not exist" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.886154 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.892814 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g" (OuterVolumeSpecName: "kube-api-access-qbx7g") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "kube-api-access-qbx7g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.902631 4804 generic.go:334] "Generic (PLEG): container finished" podID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" exitCode=0 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903107 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts" (OuterVolumeSpecName: "scripts") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903122 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerDied","Data":"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63"} Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903164 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"4f5cdaa9-8b1d-44b2-bfe6-d986f680327f","Type":"ContainerDied","Data":"13f3f152dac9edae9ea4638a3a8d8a972d428663034fabf17665286ff2611f13"} Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.903221 4804 scope.go:117] "RemoveContainer" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.906321 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.909201 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/root-account-create-update-jqk9s" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.910377 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/root-account-create-update-jqk9s" event={"ID":"be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8","Type":"ContainerDied","Data":"59c191ec61924ab2b5f8fefd52ae2f9680b75391edc58ed63a1c1c209e71f63c"} Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.910639 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data" (OuterVolumeSpecName: "config-data") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.938349 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940683 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940710 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940721 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbx7g\" (UniqueName: \"kubernetes.io/projected/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-kube-api-access-qbx7g\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940730 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940764 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940773 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.940784 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.965799 4804 scope.go:117] "RemoveContainer" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.986254 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.990508 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" (UID: "4f5cdaa9-8b1d-44b2-bfe6-d986f680327f"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.996788 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c" exitCode=0 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.997044 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4" exitCode=2 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.997121 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4" exitCode=0 Jan 28 11:45:38 crc kubenswrapper[4804]: I0128 11:45:38.999734 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-worker-8f675b957-rm9qp" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.002127 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="280cd1a0-6761-425c-8de1-bec2307ba0c0" path="/var/lib/kubelet/pods/280cd1a0-6761-425c-8de1-bec2307ba0c0/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.010521 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54fa6273-e08e-4dbb-a86b-a8951e4100fa" path="/var/lib/kubelet/pods/54fa6273-e08e-4dbb-a86b-a8951e4100fa/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.011147 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="79faecc7-1388-420a-9eee-b47d0ce87f34" path="/var/lib/kubelet/pods/79faecc7-1388-420a-9eee-b47d0ce87f34/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.011680 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8686dbae-d7dd-4662-81a8-ab51cc85a115" path="/var/lib/kubelet/pods/8686dbae-d7dd-4662-81a8-ab51cc85a115/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.012168 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cb48af9-edd2-404a-9d56-afedbfa79f07" path="/var/lib/kubelet/pods/8cb48af9-edd2-404a-9d56-afedbfa79f07/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.019366 4804 generic.go:334] "Generic (PLEG): container finished" podID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerID="386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759" exitCode=0 Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.019475 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a4586997-59ed-4e13-b7ec-3146711f642c" path="/var/lib/kubelet/pods/a4586997-59ed-4e13-b7ec-3146711f642c/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.020195 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="b0bfaf6b-2c74-4812-965a-4db80f0c4527" path="/var/lib/kubelet/pods/b0bfaf6b-2c74-4812-965a-4db80f0c4527/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.020798 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bb3c1e4d-637e-4de6-aa37-7daff5298b30" path="/var/lib/kubelet/pods/bb3c1e4d-637e-4de6-aa37-7daff5298b30/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.029073 4804 generic.go:334] "Generic (PLEG): container finished" podID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerID="ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710" exitCode=2 Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.031131 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fcc0e969-75e0-4441-a805-7845261f1ad5" path="/var/lib/kubelet/pods/fcc0e969-75e0-4441-a805-7845261f1ad5/volumes" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.045018 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046586 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"5198da96-d6b6-4b80-bb93-838dff10730e","Type":"ContainerDied","Data":"58bcb13d20697d7aea6b95393a84cbc41032eeedc92a545190a9ec6f060f3919"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046622 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"ae0fb199-797a-40c6-8c71-3b5a976b6c61","Type":"ContainerDied","Data":"b68d102d0c8eb5fad0a64accd8eecc6866ce24c76046948f3cee122445962edf"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046649 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046659 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046669 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-8f675b957-rm9qp" event={"ID":"878daeff-34bf-4dab-8118-e42c318849bb","Type":"ContainerDied","Data":"a5da49e2449f72f707c273e1370e4a7b62de12d82629f0770fb413435e71898d"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046681 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerDied","Data":"386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.046693 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerDied","Data":"ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710"} Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.047460 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.048856 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.048896 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.153476 4804 configmap.go:193] Couldn't get configMap openstack/rabbitmq-config-data: configmap "rabbitmq-config-data" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.153827 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data podName:f7c5c969-c4c2-4f76-b3c6-152473159e78 nodeName:}" failed. No retries permitted until 2026-01-28 11:45:47.153810882 +0000 UTC m=+1422.948690866 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-data" (UniqueName: "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data") pod "rabbitmq-server-0" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78") : configmap "rabbitmq-config-data" not found Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.191244 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.198361 4804 scope.go:117] "RemoveContainer" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.198794 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63\": container with ID starting with 9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63 not found: ID does not exist" containerID="9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.198957 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63"} err="failed to get container status \"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63\": rpc error: code = NotFound desc = could not find container \"9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63\": container with ID starting with 9d73a794022ec6f2b08a1f8e95743f5cc1455431594ed5230884be8d08043c63 not found: ID does not exist" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.199113 4804 scope.go:117] "RemoveContainer" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.199821 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001\": container with ID starting with 53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001 not found: ID does not exist" containerID="53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001" Jan 28 11:45:39 crc 
kubenswrapper[4804]: I0128 11:45:39.199855 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001"} err="failed to get container status \"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001\": rpc error: code = NotFound desc = could not find container \"53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001\": container with ID starting with 53201e8bf18665c74df88ff2cee30c859dbe469c99f3d9c5bfc7882d773aa001 not found: ID does not exist" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.199893 4804 scope.go:117] "RemoveContainer" containerID="d91ad709364755c2f101045dffe047be8b4a0f8b3fefadd5603f62974e04e888" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.210215 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.235002 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.253542 4804 scope.go:117] "RemoveContainer" containerID="ea5cc70522b8b244db30a0a1dd5bc4353ad8899e579dd2e9b1384915ac35e91e" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.298075 4804 scope.go:117] "RemoveContainer" containerID="49f9909506d8a2c0b51ffca97a1e1ce6efc0b0acde0bbf32f3a77e33e0c7d096" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.313023 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.341260 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/root-account-create-update-jqk9s"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361071 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361199 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361227 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361279 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-certs\" 
(UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361295 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361325 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: \"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361353 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361379 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s6xhn\" (UniqueName: \"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") pod \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\" (UID: \"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361397 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") pod \"d47089ce-8b52-4bd3-a30e-04736fed01fc\" (UID: 
\"d47089ce-8b52-4bd3-a30e-04736fed01fc\") " Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.361840 4804 scope.go:117] "RemoveContainer" containerID="5fce3701f770e3f1d822ae3950ad420da6c9b44d0df68cf4bd4c8ebf86d62649" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.363588 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.363965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data" (OuterVolumeSpecName: "config-data") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369514 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf" (OuterVolumeSpecName: "kube-api-access-flxcf") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "kube-api-access-flxcf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369737 4804 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369758 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flxcf\" (UniqueName: \"kubernetes.io/projected/d47089ce-8b52-4bd3-a30e-04736fed01fc-kube-api-access-flxcf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.369769 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/d47089ce-8b52-4bd3-a30e-04736fed01fc-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.371861 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn" (OuterVolumeSpecName: "kube-api-access-s6xhn") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "kube-api-access-s6xhn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.393120 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.419525 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.420541 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.420993 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs" (OuterVolumeSpecName: "memcached-tls-certs") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "memcached-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.432438 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.442357 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.449180 4804 scope.go:117] "RemoveContainer" containerID="61e2afaf3a01a165673d5c42d38ef739fe857b1c1e03f62547614151ae809226" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.449329 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.451430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d47089ce-8b52-4bd3-a30e-04736fed01fc" (UID: "d47089ce-8b52-4bd3-a30e-04736fed01fc"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.457437 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.461426 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config" (OuterVolumeSpecName: "kube-state-metrics-tls-config") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "kube-state-metrics-tls-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.466410 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.468161 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-worker-8f675b957-rm9qp"] Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.470648 4804 scope.go:117] "RemoveContainer" containerID="1144f29504fe6195fc342eb320a4b830871f3e9d0216c4ce9fc167121dce473e" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471616 4804 reconciler_common.go:293] "Volume detached for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-memcached-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471637 4804 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471649 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s6xhn\" (UniqueName: 
\"kubernetes.io/projected/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-api-access-s6xhn\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471661 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d47089ce-8b52-4bd3-a30e-04736fed01fc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.471670 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.490491 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs" (OuterVolumeSpecName: "kube-state-metrics-tls-certs") pod "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" (UID: "6af777f5-5dfc-4f4d-b7c5-dd0de3f80def"). InnerVolumeSpecName "kube-state-metrics-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.506940 4804 scope.go:117] "RemoveContainer" containerID="f4726c69a403b9a8eefc4f17886ef00a383e10ea26adf572bdfed7ea1d3723a8" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.511801 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.512852 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.519255 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.519503 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.573413 4804 reconciler_common.go:293] "Volume detached for volume \"kube-state-metrics-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def-kube-state-metrics-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.674942 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: I0128 11:45:39.675075 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") pod \"keystone-f8f4-account-create-update-xlhb4\" (UID: \"e8eac10f-27a6-4229-9281-ead753bf852d\") " pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.675179 4804 configmap.go:193] Couldn't get configMap openstack/openstack-scripts: configmap "openstack-scripts" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.675229 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:41.675214346 +0000 UTC m=+1417.470094330 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : configmap "openstack-scripts" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.680096 4804 projected.go:194] Error preparing data for projected volume kube-api-access-fv7qb for pod openstack/keystone-f8f4-account-create-update-xlhb4: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:39 crc kubenswrapper[4804]: E0128 11:45:39.680171 4804 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb podName:e8eac10f-27a6-4229-9281-ead753bf852d nodeName:}" failed. No retries permitted until 2026-01-28 11:45:41.68015201 +0000 UTC m=+1417.475031994 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-fv7qb" (UniqueName: "kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb") pod "keystone-f8f4-account-create-update-xlhb4" (UID: "e8eac10f-27a6-4229-9281-ead753bf852d") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.079546 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"d47089ce-8b52-4bd3-a30e-04736fed01fc","Type":"ContainerDied","Data":"86818d705a40c4508845f5e3530cd1a2ecd08917ac1287e69fd364a076602c00"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.079842 4804 scope.go:117] "RemoveContainer" containerID="386c42bab4089fa2791b36fa5e66b769af00f0ef8e73fa961d1e6e6f38f01759" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.079581 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.083834 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.084957 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"6af777f5-5dfc-4f4d-b7c5-dd0de3f80def","Type":"ContainerDied","Data":"21e20525ca7a6c58cab2832c14cfe80c2d4514f39f84f4eb3108c5f05572b1bf"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.112928 4804 generic.go:334] "Generic (PLEG): container finished" podID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerID="a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c" exitCode=0 Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.112995 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerDied","Data":"a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.138460 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.142974 4804 scope.go:117] "RemoveContainer" containerID="ccddc2c43c4ec70519371e0d1f04a70d45a5a33973b05eef166b2e2189b30710" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.145845 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcdd787-6628-49ee-abcf-0146c096f547/ovn-northd/0.log" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.145899 4804 generic.go:334] "Generic (PLEG): container finished" podID="edcdd787-6628-49ee-abcf-0146c096f547" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" exitCode=139 Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.145955 4804 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerDied","Data":"1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.159773 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.165683 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/memcached-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.167973 4804 generic.go:334] "Generic (PLEG): container finished" podID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerID="95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b" exitCode=0 Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.168044 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-f8f4-account-create-update-xlhb4" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.168039 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerDied","Data":"95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b"} Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.171142 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/memcached-0"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.222765 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.227392 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-f8f4-account-create-update-xlhb4"] Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.283685 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e8eac10f-27a6-4229-9281-ead753bf852d-operator-scripts\") 
on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.283713 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fv7qb\" (UniqueName: \"kubernetes.io/projected/e8eac10f-27a6-4229-9281-ead753bf852d-kube-api-access-fv7qb\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.427261 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488001 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488245 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488304 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488345 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488408 4804 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488441 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488470 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488512 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488582 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488728 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") pod 
\"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.488807 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.491247 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.491342 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.491872 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504177 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504305 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504728 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j" (OuterVolumeSpecName: "kube-api-access-gzs9j") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "kube-api-access-gzs9j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.504897 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "persistence") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.527088 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info" (OuterVolumeSpecName: "pod-info") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.527929 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data" (OuterVolumeSpecName: "config-data") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.565717 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf" (OuterVolumeSpecName: "server-conf") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.589635 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590103 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") pod \"76d127f1-97d9-4552-9bdb-b3482a45951d\" (UID: \"76d127f1-97d9-4552-9bdb-b3482a45951d\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590576 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590596 4804 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590607 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590617 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590625 4804 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/76d127f1-97d9-4552-9bdb-b3482a45951d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590635 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-erlang-cookie\") on node 
\"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590644 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590652 4804 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/76d127f1-97d9-4552-9bdb-b3482a45951d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590663 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gzs9j\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-kube-api-access-gzs9j\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.590671 4804 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/76d127f1-97d9-4552-9bdb-b3482a45951d-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: W0128 11:45:40.590998 4804 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/76d127f1-97d9-4552-9bdb-b3482a45951d/volumes/kubernetes.io~projected/rabbitmq-confd Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.591019 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "76d127f1-97d9-4552-9bdb-b3482a45951d" (UID: "76d127f1-97d9-4552-9bdb-b3482a45951d"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.606975 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.634650 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.653387 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcdd787-6628-49ee-abcf-0146c096f547/ovn-northd/0.log" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.653472 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.700414 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701015 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701041 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701087 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") pod 
\"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701110 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701140 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701181 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701202 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701242 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701260 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701321 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701386 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701401 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 
11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701452 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701475 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") pod \"f7c5c969-c4c2-4f76-b3c6-152473159e78\" (UID: \"f7c5c969-c4c2-4f76-b3c6-152473159e78\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701522 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") pod \"edcdd787-6628-49ee-abcf-0146c096f547\" (UID: \"edcdd787-6628-49ee-abcf-0146c096f547\") " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701799 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/76d127f1-97d9-4552-9bdb-b3482a45951d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.701809 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.704688 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.707832 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.715150 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.715532 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir" (OuterVolumeSpecName: "ovn-rundir") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "ovn-rundir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.716112 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config" (OuterVolumeSpecName: "config") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.716217 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "persistence") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.716279 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info" (OuterVolumeSpecName: "pod-info") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.719173 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.729642 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr" (OuterVolumeSpecName: "kube-api-access-wqhxr") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "kube-api-access-wqhxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.732160 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.737234 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts" (OuterVolumeSpecName: "scripts") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.739617 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data" (OuterVolumeSpecName: "config-data") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.763083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz" (OuterVolumeSpecName: "kube-api-access-6jjlz") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "kube-api-access-6jjlz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.777101 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf" (OuterVolumeSpecName: "server-conf") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.780249 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804086 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804128 4804 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/f7c5c969-c4c2-4f76-b3c6-152473159e78-pod-info\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804140 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wqhxr\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-kube-api-access-wqhxr\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804151 4804 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-server-conf\") on node \"crc\" DevicePath \"\"" Jan 28 
11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804160 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804191 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804200 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804208 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jjlz\" (UniqueName: \"kubernetes.io/projected/edcdd787-6628-49ee-abcf-0146c096f547-kube-api-access-6jjlz\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804216 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/edcdd787-6628-49ee-abcf-0146c096f547-ovn-rundir\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804226 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804234 4804 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804241 4804 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804250 4804 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/f7c5c969-c4c2-4f76-b3c6-152473159e78-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804258 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/edcdd787-6628-49ee-abcf-0146c096f547-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.804266 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/f7c5c969-c4c2-4f76-b3c6-152473159e78-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.808501 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs" (OuterVolumeSpecName: "ovn-northd-tls-certs") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). InnerVolumeSpecName "ovn-northd-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.820428 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.821589 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs" (OuterVolumeSpecName: "metrics-certs-tls-certs") pod "edcdd787-6628-49ee-abcf-0146c096f547" (UID: "edcdd787-6628-49ee-abcf-0146c096f547"). 
InnerVolumeSpecName "metrics-certs-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.832076 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "f7c5c969-c4c2-4f76-b3c6-152473159e78" (UID: "f7c5c969-c4c2-4f76-b3c6-152473159e78"). InnerVolumeSpecName "rabbitmq-confd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905824 4804 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/f7c5c969-c4c2-4f76-b3c6-152473159e78-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905874 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905904 4804 reconciler_common.go:293] "Volume detached for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-metrics-certs-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.905917 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/edcdd787-6628-49ee-abcf-0146c096f547-ovn-northd-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.923508 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" path="/var/lib/kubelet/pods/4f5cdaa9-8b1d-44b2-bfe6-d986f680327f/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.924599 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="5198da96-d6b6-4b80-bb93-838dff10730e" path="/var/lib/kubelet/pods/5198da96-d6b6-4b80-bb93-838dff10730e/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.925415 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" path="/var/lib/kubelet/pods/6af777f5-5dfc-4f4d-b7c5-dd0de3f80def/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.926653 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="878daeff-34bf-4dab-8118-e42c318849bb" path="/var/lib/kubelet/pods/878daeff-34bf-4dab-8118-e42c318849bb/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.927445 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" path="/var/lib/kubelet/pods/ae0fb199-797a-40c6-8c71-3b5a976b6c61/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.928113 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" path="/var/lib/kubelet/pods/be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.929354 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" path="/var/lib/kubelet/pods/d47089ce-8b52-4bd3-a30e-04736fed01fc/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.929801 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e8eac10f-27a6-4229-9281-ead753bf852d" path="/var/lib/kubelet/pods/e8eac10f-27a6-4229-9281-ead753bf852d/volumes" Jan 28 11:45:40 crc kubenswrapper[4804]: E0128 11:45:40.993624 4804 handlers.go:78] "Exec lifecycle hook for Container in Pod failed" err=< Jan 28 11:45:40 crc kubenswrapper[4804]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-28T11:45:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 11:45:40 crc 
kubenswrapper[4804]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 28 11:45:40 crc kubenswrapper[4804]: > execCommand=["/usr/share/ovn/scripts/ovn-ctl","stop_controller"] containerName="ovn-controller" pod="openstack/ovn-controller-xtdr8" message=< Jan 28 11:45:40 crc kubenswrapper[4804]: Exiting ovn-controller (1) [FAILED] Jan 28 11:45:40 crc kubenswrapper[4804]: Killing ovn-controller (1) [ OK ] Jan 28 11:45:40 crc kubenswrapper[4804]: Killing ovn-controller (1) with SIGKILL [ OK ] Jan 28 11:45:40 crc kubenswrapper[4804]: 2026-01-28T11:45:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 11:45:40 crc kubenswrapper[4804]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 28 11:45:40 crc kubenswrapper[4804]: > Jan 28 11:45:40 crc kubenswrapper[4804]: E0128 11:45:40.993663 4804 kuberuntime_container.go:691] "PreStop hook failed" err=< Jan 28 11:45:40 crc kubenswrapper[4804]: command '/usr/share/ovn/scripts/ovn-ctl stop_controller' exited with 137: 2026-01-28T11:45:33Z|00001|fatal_signal|WARN|terminating with signal 14 (Alarm clock) Jan 28 11:45:40 crc kubenswrapper[4804]: /etc/init.d/functions: line 589: 414 Alarm clock "$@" Jan 28 11:45:40 crc kubenswrapper[4804]: > pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" containerID="cri-o://4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d" Jan 28 11:45:40 crc kubenswrapper[4804]: I0128 11:45:40.993696 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ovn-controller-xtdr8" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" containerID="cri-o://4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d" gracePeriod=20 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.193927 4804 generic.go:334] "Generic (PLEG): container finished" podID="82ef8b43-de59-45f8-9c2a-765c5709054b" 
containerID="bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b" exitCode=0 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.194018 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerDied","Data":"bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.200286 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"76d127f1-97d9-4552-9bdb-b3482a45951d","Type":"ContainerDied","Data":"304507b474cdd7086e7df033bc16291530ac6b5f55a2e85e565b86562e7fde59"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.200348 4804 scope.go:117] "RemoveContainer" containerID="a7bcd4c4937ab18a41cb4959a39743e78382843e721b78db4c0a6c20de518e0c" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.200521 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.220946 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_edcdd787-6628-49ee-abcf-0146c096f547/ovn-northd/0.log" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.221031 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"edcdd787-6628-49ee-abcf-0146c096f547","Type":"ContainerDied","Data":"1c34e1e54f29019381489766526d85a7ed81f51d7a176f0cfb6db1161fa7dad8"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.221151 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-northd-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.228019 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"f7c5c969-c4c2-4f76-b3c6-152473159e78","Type":"ContainerDied","Data":"a5146612f4e2d80705681617c2e405b8c7dbe80637772da2d39bae9bb807359c"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.228108 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.233218 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xtdr8_ec6a5a02-2cbe-421b-bcf5-54572e000f28/ovn-controller/0.log" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.233278 4804 generic.go:334] "Generic (PLEG): container finished" podID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerID="4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d" exitCode=137 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.233358 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerDied","Data":"4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.239953 4804 generic.go:334] "Generic (PLEG): container finished" podID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" exitCode=0 Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.240031 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerDied","Data":"87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955"} Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.536093 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-xtdr8_ec6a5a02-2cbe-421b-bcf5-54572e000f28/ovn-controller/0.log" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.536365 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.551545 4804 scope.go:117] "RemoveContainer" containerID="938917cd0b60c23765326c3b0e216a34a5756c286f26d1223873445f92cad09a" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.560090 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.576839 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.588985 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.594700 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.600428 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-northd-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.600854 4804 scope.go:117] "RemoveContainer" containerID="17400e5f10254b0d771acc135458ad1381f04acdf3cc5817d31b6d3932b519f1" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626413 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626526 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626549 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") pod \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626572 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") pod \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626632 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") pod \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\" (UID: \"8e88e9db-b96d-4009-a4e6-ccbb5be53f85\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626690 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626727 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 
11:45:41.626779 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626814 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.626842 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") pod \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\" (UID: \"ec6a5a02-2cbe-421b-bcf5-54572e000f28\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.627482 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.627491 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run" (OuterVolumeSpecName: "var-run") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.627531 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.628639 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts" (OuterVolumeSpecName: "scripts") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.632707 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.632747 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw" (OuterVolumeSpecName: "kube-api-access-98jpw") pod "8e88e9db-b96d-4009-a4e6-ccbb5be53f85" (UID: "8e88e9db-b96d-4009-a4e6-ccbb5be53f85"). InnerVolumeSpecName "kube-api-access-98jpw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.641430 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr" (OuterVolumeSpecName: "kube-api-access-frdmr") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "kube-api-access-frdmr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.648188 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.666377 4804 scope.go:117] "RemoveContainer" containerID="1f6db044032b9ea275036a4c598039837713d6af1c8b750e39682cd377aa7e00" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.684543 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data" (OuterVolumeSpecName: "config-data") pod "8e88e9db-b96d-4009-a4e6-ccbb5be53f85" (UID: "8e88e9db-b96d-4009-a4e6-ccbb5be53f85"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.684594 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.695181 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8e88e9db-b96d-4009-a4e6-ccbb5be53f85" (UID: "8e88e9db-b96d-4009-a4e6-ccbb5be53f85"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.713608 4804 scope.go:117] "RemoveContainer" containerID="95dfda03211e6c344c512015a17826e376bdb3ad7fb59bc5821bb495def03e2b" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730323 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/ec6a5a02-2cbe-421b-bcf5-54572e000f28-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730352 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730366 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-98jpw\" (UniqueName: \"kubernetes.io/projected/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-kube-api-access-98jpw\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730374 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8e88e9db-b96d-4009-a4e6-ccbb5be53f85-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730383 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730390 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frdmr\" (UniqueName: \"kubernetes.io/projected/ec6a5a02-2cbe-421b-bcf5-54572e000f28-kube-api-access-frdmr\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730398 4804 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730405 4804 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/ec6a5a02-2cbe-421b-bcf5-54572e000f28-var-log-ovn\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.730413 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.759401 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs" (OuterVolumeSpecName: "ovn-controller-tls-certs") pod "ec6a5a02-2cbe-421b-bcf5-54572e000f28" (UID: "ec6a5a02-2cbe-421b-bcf5-54572e000f28"). InnerVolumeSpecName "ovn-controller-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.763233 4804 scope.go:117] "RemoveContainer" containerID="b936b1f85b5d914a16d472ff712a5db48c0674a29e82c956ccf023610946a7cb" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.780602 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831137 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831182 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831203 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831233 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831250 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831302 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831348 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831368 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") pod \"90f5a2ef-6224-4af8-8bba-32c689a960f1\" (UID: \"90f5a2ef-6224-4af8-8bba-32c689a960f1\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.831632 4804 reconciler_common.go:293] "Volume detached for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/ec6a5a02-2cbe-421b-bcf5-54572e000f28-ovn-controller-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.832448 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.833003 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.833574 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.836831 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.837770 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts" (OuterVolumeSpecName: "scripts") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.838516 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52" (OuterVolumeSpecName: "kube-api-access-ldd52") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "kube-api-access-ldd52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.872811 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.884491 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.924068 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.930266 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data" (OuterVolumeSpecName: "config-data") pod "90f5a2ef-6224-4af8-8bba-32c689a960f1" (UID: "90f5a2ef-6224-4af8-8bba-32c689a960f1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932758 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932806 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932843 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932865 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932910 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932931 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dff5q\" (UniqueName: 
\"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932946 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.932978 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933001 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933056 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933078 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 
11:45:41.933180 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") pod \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\" (UID: \"4efe85dc-b64c-4cbe-83f7-89fa462a95a0\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933221 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") pod \"82ef8b43-de59-45f8-9c2a-765c5709054b\" (UID: \"82ef8b43-de59-45f8-9c2a-765c5709054b\") " Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933540 4804 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933564 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933573 4804 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933581 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ldd52\" (UniqueName: \"kubernetes.io/projected/90f5a2ef-6224-4af8-8bba-32c689a960f1-kube-api-access-ldd52\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933606 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 
11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933616 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933625 4804 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/90f5a2ef-6224-4af8-8bba-32c689a960f1-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.933633 4804 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/90f5a2ef-6224-4af8-8bba-32c689a960f1-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.936181 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q" (OuterVolumeSpecName: "kube-api-access-dff5q") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "kube-api-access-dff5q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.936208 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs" (OuterVolumeSpecName: "logs") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.937263 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.937579 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.937650 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts" (OuterVolumeSpecName: "scripts") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.938049 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp" (OuterVolumeSpecName: "kube-api-access-qx5wp") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "kube-api-access-qx5wp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.939270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.957959 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.960561 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data" (OuterVolumeSpecName: "config-data") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.961528 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.989194 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.989245 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "4efe85dc-b64c-4cbe-83f7-89fa462a95a0" (UID: "4efe85dc-b64c-4cbe-83f7-89fa462a95a0"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:41 crc kubenswrapper[4804]: I0128 11:45:41.991821 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data" (OuterVolumeSpecName: "config-data") pod "82ef8b43-de59-45f8-9c2a-765c5709054b" (UID: "82ef8b43-de59-45f8-9c2a-765c5709054b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035122 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035157 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035168 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035176 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035184 4804 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/82ef8b43-de59-45f8-9c2a-765c5709054b-logs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035192 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dff5q\" (UniqueName: \"kubernetes.io/projected/82ef8b43-de59-45f8-9c2a-765c5709054b-kube-api-access-dff5q\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035202 4804 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035210 4804 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035220 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035229 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx5wp\" (UniqueName: \"kubernetes.io/projected/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-kube-api-access-qx5wp\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035237 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035245 4804 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/4efe85dc-b64c-4cbe-83f7-89fa462a95a0-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.035253 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/82ef8b43-de59-45f8-9c2a-765c5709054b-config-data-custom\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.227746 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286412 4804 generic.go:334] "Generic (PLEG): container finished" podID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3" exitCode=0 Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286567 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6f885d959c-vhjh4" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286618 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerDied","Data":"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286660 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6f885d959c-vhjh4" event={"ID":"4efe85dc-b64c-4cbe-83f7-89fa462a95a0","Type":"ContainerDied","Data":"7c39859c40631f277cb9db7ae157687f468c42e18dd7308227c1bac58d71a744"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.286679 4804 scope.go:117] "RemoveContainer" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.288249 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"8e88e9db-b96d-4009-a4e6-ccbb5be53f85","Type":"ContainerDied","Data":"7a39a79e7a20e9ba4fa85ecd18b271a0dbca751974fc6c0f7c6352d267b04dea"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.288304 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.297174 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" event={"ID":"82ef8b43-de59-45f8-9c2a-765c5709054b","Type":"ContainerDied","Data":"0556907b161f5a19bd7e76c946764eabb51dab90af80f30118fa8d78582a879a"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.297243 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-keystone-listener-5f7496d4bd-26fnt" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.302789 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-xtdr8_ec6a5a02-2cbe-421b-bcf5-54572e000f28/ovn-controller/0.log" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.302918 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-xtdr8" event={"ID":"ec6a5a02-2cbe-421b-bcf5-54572e000f28","Type":"ContainerDied","Data":"33b738bafa7ea125cb6f8e21be749a37e8dc0b050b5dffa31b3e9875c08ddd2d"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.303095 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-xtdr8" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310445 4804 generic.go:334] "Generic (PLEG): container finished" podID="469a0049-480f-4cde-848d-4b11cb54130b" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a" exitCode=0 Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310493 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerDied","Data":"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.310572 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"469a0049-480f-4cde-848d-4b11cb54130b","Type":"ContainerDied","Data":"0f20d09f4e22850dccdafc066e7822cd90278816628e2fe4c307f19e6234a0ef"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314642 4804 generic.go:334] "Generic (PLEG): container finished" podID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342" exitCode=0 Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314755 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314784 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"90f5a2ef-6224-4af8-8bba-32c689a960f1","Type":"ContainerDied","Data":"84fd0aed08998b6cb545affdc4c5c0c2a24e6d8450e334aba33aae6ec80b288a"} Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.314758 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.334778 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.338619 4804 scope.go:117] "RemoveContainer" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3" Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.340224 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3\": container with ID starting with 31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3 not found: ID does not exist" containerID="31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.340265 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3"} err="failed to get container status \"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3\": rpc error: code = NotFound desc = could not find container \"31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3\": container with ID starting with 31328681fa7161caf269e5e4ef63f5dc67d86cbbe17f890dd998179c827c6df3 not found: ID does not exist" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.340312 4804 scope.go:117] "RemoveContainer" containerID="87c8a05a13e5c4994ae379707a39a074a0eebbe05ff9792d9fd8e8f442678955" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.341428 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") pod \"469a0049-480f-4cde-848d-4b11cb54130b\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " Jan 28 11:45:42 crc 
kubenswrapper[4804]: I0128 11:45:42.341629 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") pod \"469a0049-480f-4cde-848d-4b11cb54130b\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.341804 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") pod \"469a0049-480f-4cde-848d-4b11cb54130b\" (UID: \"469a0049-480f-4cde-848d-4b11cb54130b\") " Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.344269 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-0"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.359733 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.369582 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-6f885d959c-vhjh4"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.376604 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.376612 4804 scope.go:117] "RemoveContainer" containerID="bcbdcf39ea5a39e34418c6ab9208339d9f7fde2eca3c37cbb5806710252cf88b" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.389862 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4" (OuterVolumeSpecName: "kube-api-access-xqdj4") pod "469a0049-480f-4cde-848d-4b11cb54130b" (UID: "469a0049-480f-4cde-848d-4b11cb54130b"). InnerVolumeSpecName "kube-api-access-xqdj4". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.432259 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-keystone-listener-5f7496d4bd-26fnt"] Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.433764 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data" (OuterVolumeSpecName: "config-data") pod "469a0049-480f-4cde-848d-4b11cb54130b" (UID: "469a0049-480f-4cde-848d-4b11cb54130b"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.433788 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "469a0049-480f-4cde-848d-4b11cb54130b" (UID: "469a0049-480f-4cde-848d-4b11cb54130b"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.440489 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-xtdr8"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.440567 4804 scope.go:117] "RemoveContainer" containerID="1fe685e535efd281a9b4cf9713641d9161c23425d8abe0134248a2395c6b7208"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.448911 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-xtdr8"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.451803 4804 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-config-data\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.451825 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/469a0049-480f-4cde-848d-4b11cb54130b-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.451838 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xqdj4\" (UniqueName: \"kubernetes.io/projected/469a0049-480f-4cde-848d-4b11cb54130b-kube-api-access-xqdj4\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.457055 4804 scope.go:117] "RemoveContainer" containerID="4a2eea6008d67570b3d18ca463796d41c0886d498dab2d5b7ee01d2e5f0bd61d"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.457176 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.463841 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.484515 4804 scope.go:117] "RemoveContainer" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.500319 4804 scope.go:117] "RemoveContainer" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"
Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.500672 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a\": container with ID starting with df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a not found: ID does not exist" containerID="df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.500704 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a"} err="failed to get container status \"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a\": rpc error: code = NotFound desc = could not find container \"df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a\": container with ID starting with df4dfb42a561a4a21602effef58fd7c2f7da1b51d324120396987e2543cc9f0a not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.500725 4804 scope.go:117] "RemoveContainer" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.520023 4804 scope.go:117] "RemoveContainer" containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.538126 4804 scope.go:117] "RemoveContainer" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.555482 4804 scope.go:117] "RemoveContainer" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.570730 4804 scope.go:117] "RemoveContainer" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"
Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.571185 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c\": container with ID starting with 1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c not found: ID does not exist" containerID="1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571218 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c"} err="failed to get container status \"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c\": rpc error: code = NotFound desc = could not find container \"1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c\": container with ID starting with 1b6de4c9cc02c827b829469281fc4722107e56b40ea61861f181ef818c321b8c not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571244 4804 scope.go:117] "RemoveContainer" containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"
Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.571557 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4\": container with ID starting with 4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4 not found: ID does not exist" containerID="4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571657 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4"} err="failed to get container status \"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4\": rpc error: code = NotFound desc = could not find container \"4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4\": container with ID starting with 4edb0114299087e8f738adc66a2d6c2e41b2df09da4a6bad80492c5468200cc4 not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.571807 4804 scope.go:117] "RemoveContainer" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"
Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.572308 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342\": container with ID starting with e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342 not found: ID does not exist" containerID="e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.572361 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342"} err="failed to get container status \"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342\": rpc error: code = NotFound desc = could not find container \"e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342\": container with ID starting with e821803cb3d4cb7d069a3d1aae8d52ecea0d86fbbe11ffb71ec41e725de3e342 not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.572395 4804 scope.go:117] "RemoveContainer" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"
Jan 28 11:45:42 crc kubenswrapper[4804]: E0128 11:45:42.574107 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4\": container with ID starting with e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4 not found: ID does not exist" containerID="e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.574208 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4"} err="failed to get container status \"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4\": rpc error: code = NotFound desc = could not find container \"e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4\": container with ID starting with e61b3b09ee1f53d4a174a5714924ce3655de93654ea0547c402f1704cd47e3b4 not found: ID does not exist"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.649334 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.657858 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"]
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.924181 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="469a0049-480f-4cde-848d-4b11cb54130b" path="/var/lib/kubelet/pods/469a0049-480f-4cde-848d-4b11cb54130b/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.924705 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" path="/var/lib/kubelet/pods/4efe85dc-b64c-4cbe-83f7-89fa462a95a0/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.926080 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" path="/var/lib/kubelet/pods/76d127f1-97d9-4552-9bdb-b3482a45951d/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.927179 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" path="/var/lib/kubelet/pods/82ef8b43-de59-45f8-9c2a-765c5709054b/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.927787 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" path="/var/lib/kubelet/pods/8e88e9db-b96d-4009-a4e6-ccbb5be53f85/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.928334 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" path="/var/lib/kubelet/pods/90f5a2ef-6224-4af8-8bba-32c689a960f1/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.929461 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" path="/var/lib/kubelet/pods/ec6a5a02-2cbe-421b-bcf5-54572e000f28/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.930136 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edcdd787-6628-49ee-abcf-0146c096f547" path="/var/lib/kubelet/pods/edcdd787-6628-49ee-abcf-0146c096f547/volumes"
Jan 28 11:45:42 crc kubenswrapper[4804]: I0128 11:45:42.931452 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" path="/var/lib/kubelet/pods/f7c5c969-c4c2-4f76-b3c6-152473159e78/volumes"
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.065653 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.066579 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.066749 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.067012 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.067064 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server"
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.070181 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.075894 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:43 crc kubenswrapper[4804]: E0128 11:45:43.076004 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd"
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.156701 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.158274 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.159296 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:47 crc kubenswrapper[4804]: E0128 11:45:47.159342 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera"
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.065187 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.065685 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.066119 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.066193 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server"
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.066626 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.068427 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.069668 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:48 crc kubenswrapper[4804]: E0128 11:45:48.069708 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd"
Jan 28 11:45:51 crc kubenswrapper[4804]: I0128 11:45:51.433551 4804 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/neutron-7d88fd9b89-w66bx" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" probeResult="failure" output="Get \"https://10.217.0.167:9696/\": dial tcp 10.217.0.167:9696: connect: connection refused"
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.065547 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.066483 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.066860 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.067215 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server"
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.067908 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.075221 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.077358 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:53 crc kubenswrapper[4804]: E0128 11:45:53.077464 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.405766 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d88fd9b89-w66bx"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.441970 4804 generic.go:334] "Generic (PLEG): container finished" podID="095bc753-88c4-456c-a3ae-aa0040a76338" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe" exitCode=0
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442021 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerDied","Data":"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"}
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442057 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-7d88fd9b89-w66bx" event={"ID":"095bc753-88c4-456c-a3ae-aa0040a76338","Type":"ContainerDied","Data":"d66804f71c7164aff0af828551ca8929bfd4e365e7c25ea56443ca4b0d53463e"}
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442080 4804 scope.go:117] "RemoveContainer" containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.442143 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-7d88fd9b89-w66bx"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.468223 4804 scope.go:117] "RemoveContainer" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.494753 4804 scope.go:117] "RemoveContainer" containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"
Jan 28 11:45:55 crc kubenswrapper[4804]: E0128 11:45:55.495232 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f\": container with ID starting with 789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f not found: ID does not exist" containerID="789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.495269 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f"} err="failed to get container status \"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f\": rpc error: code = NotFound desc = could not find container \"789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f\": container with ID starting with 789fe338d88e77eacdc56d29abb08e80768c170c7967f986d668147cc5e6a90f not found: ID does not exist"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.495294 4804 scope.go:117] "RemoveContainer" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"
Jan 28 11:45:55 crc kubenswrapper[4804]: E0128 11:45:55.495531 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": container with ID starting with 5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe not found: ID does not exist" containerID="5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.495559 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe"} err="failed to get container status \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": rpc error: code = NotFound desc = could not find container \"5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe\": container with ID starting with 5e26100cafad3396e969dee974cfa3017817c3108d2ceb44f1b8669646ef1dfe not found: ID does not exist"
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542533 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") "
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542645 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") "
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542803 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") "
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542855 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") "
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542899 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") "
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542937 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") "
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.542975 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") pod \"095bc753-88c4-456c-a3ae-aa0040a76338\" (UID: \"095bc753-88c4-456c-a3ae-aa0040a76338\") "
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.550372 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k" (OuterVolumeSpecName: "kube-api-access-q9c5k") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "kube-api-access-q9c5k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.560215 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.583118 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config" (OuterVolumeSpecName: "config") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.587220 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.591090 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.600466 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.616818 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "095bc753-88c4-456c-a3ae-aa0040a76338" (UID: "095bc753-88c4-456c-a3ae-aa0040a76338"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644189 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9c5k\" (UniqueName: \"kubernetes.io/projected/095bc753-88c4-456c-a3ae-aa0040a76338-kube-api-access-q9c5k\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644228 4804 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-internal-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644239 4804 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-ovndb-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644248 4804 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644257 4804 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-httpd-config\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644266 4804 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-public-tls-certs\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.644277 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/095bc753-88c4-456c-a3ae-aa0040a76338-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.781770 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"]
Jan 28 11:45:55 crc kubenswrapper[4804]: I0128 11:45:55.789023 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-7d88fd9b89-w66bx"]
Jan 28 11:45:56 crc kubenswrapper[4804]: I0128 11:45:56.927588 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" path="/var/lib/kubelet/pods/095bc753-88c4-456c-a3ae-aa0040a76338/volumes"
Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.157066 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.158375 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.159739 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" cmd=["/bin/bash","/var/lib/operator-scripts/mysql_probe.sh","readiness"]
Jan 28 11:45:57 crc kubenswrapper[4804]: E0128 11:45:57.159824 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/openstack-galera-0" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera"
Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.065403 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.066153 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.066544 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" cmd=["/usr/local/bin/container-scripts/ovsdb_server_readiness.sh"]
Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.066639 4804 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 is running failed: container process not found" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server"
Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.068182 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.070441 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"]
Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.072057 4804 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1"
containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" cmd=["/usr/local/bin/container-scripts/vswitchd_readiness.sh"] Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.072091 4804 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/ovn-controller-ovs-pfzkj" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.115399 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282442 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282491 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282539 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282577 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282616 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282637 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282703 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.282739 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") pod \"33d56e9c-416a-4816-81a7-8def89c20c8e\" (UID: \"33d56e9c-416a-4816-81a7-8def89c20c8e\") " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.283561 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "config-data-default". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.283613 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.284056 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.284373 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.288568 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m" (OuterVolumeSpecName: "kube-api-access-fb78m") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "kube-api-access-fb78m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.292309 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.307221 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.334951 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs" (OuterVolumeSpecName: "galera-tls-certs") pod "33d56e9c-416a-4816-81a7-8def89c20c8e" (UID: "33d56e9c-416a-4816-81a7-8def89c20c8e"). InnerVolumeSpecName "galera-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383853 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383970 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383982 4804 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.383990 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fb78m\" (UniqueName: \"kubernetes.io/projected/33d56e9c-416a-4816-81a7-8def89c20c8e-kube-api-access-fb78m\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.384001 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.384009 4804 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.384016 4804 reconciler_common.go:293] "Volume detached for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/33d56e9c-416a-4816-81a7-8def89c20c8e-galera-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc 
kubenswrapper[4804]: I0128 11:45:58.384025 4804 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/33d56e9c-416a-4816-81a7-8def89c20c8e-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.398249 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.471922 4804 generic.go:334] "Generic (PLEG): container finished" podID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" exitCode=0 Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.471964 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerDied","Data":"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361"} Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.472017 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"33d56e9c-416a-4816-81a7-8def89c20c8e","Type":"ContainerDied","Data":"a6f77cd6c96b39492fe76acbd919310cca2dbd61ed6cf94d721e54f9cb0227d1"} Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.472057 4804 scope.go:117] "RemoveContainer" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.472196 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstack-galera-0" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.485977 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.506957 4804 scope.go:117] "RemoveContainer" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.509466 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.514657 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/openstack-galera-0"] Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.526445 4804 scope.go:117] "RemoveContainer" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.526947 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361\": container with ID starting with 5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361 not found: ID does not exist" containerID="5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.527035 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361"} err="failed to get container status \"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361\": rpc error: code = NotFound desc = could not find container \"5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361\": container with ID starting with 5e794622251477fe10b3a3fe8bcd7f3e9635629894d37a0f7ac33e9a6a339361 
not found: ID does not exist" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.527108 4804 scope.go:117] "RemoveContainer" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" Jan 28 11:45:58 crc kubenswrapper[4804]: E0128 11:45:58.527599 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340\": container with ID starting with 111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340 not found: ID does not exist" containerID="111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.527639 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340"} err="failed to get container status \"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340\": rpc error: code = NotFound desc = could not find container \"111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340\": container with ID starting with 111c77dfddd53dd36ed026d28b3850532644a4ec72ca2e2679381fcc9dbb8340 not found: ID does not exist" Jan 28 11:45:58 crc kubenswrapper[4804]: I0128 11:45:58.924013 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" path="/var/lib/kubelet/pods/33d56e9c-416a-4816-81a7-8def89c20c8e/volumes" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.067625 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pfzkj_9d301959-ed06-4b22-8e97-f3fc9a9bc491/ovs-vswitchd/0.log" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.068748 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139017 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139061 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139113 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139139 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139161 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run" (OuterVolumeSpecName: "var-run") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139245 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139283 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib" (OuterVolumeSpecName: "var-lib") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "var-lib". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139301 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") pod \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\" (UID: \"9d301959-ed06-4b22-8e97-f3fc9a9bc491\") " Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139307 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs" (OuterVolumeSpecName: "etc-ovs") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "etc-ovs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139392 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log" (OuterVolumeSpecName: "var-log") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "var-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139680 4804 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-run\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139691 4804 reconciler_common.go:293] "Volume detached for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-lib\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139700 4804 reconciler_common.go:293] "Volume detached for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-etc-ovs\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.139709 4804 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9d301959-ed06-4b22-8e97-f3fc9a9bc491-var-log\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.140207 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts" (OuterVolumeSpecName: "scripts") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.144107 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx" (OuterVolumeSpecName: "kube-api-access-djwbx") pod "9d301959-ed06-4b22-8e97-f3fc9a9bc491" (UID: "9d301959-ed06-4b22-8e97-f3fc9a9bc491"). InnerVolumeSpecName "kube-api-access-djwbx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.240614 4804 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9d301959-ed06-4b22-8e97-f3fc9a9bc491-scripts\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.240653 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djwbx\" (UniqueName: \"kubernetes.io/projected/9d301959-ed06-4b22-8e97-f3fc9a9bc491-kube-api-access-djwbx\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.522536 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-pfzkj_9d301959-ed06-4b22-8e97-f3fc9a9bc491/ovs-vswitchd/0.log" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523401 4804 generic.go:334] "Generic (PLEG): container finished" podID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" exitCode=137 Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523438 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067"} Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523464 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-pfzkj" event={"ID":"9d301959-ed06-4b22-8e97-f3fc9a9bc491","Type":"ContainerDied","Data":"2ef238b63ba108007593ebb8599aaea3fae02c4b5040dd8085355ce0141a6ab3"} Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523481 4804 scope.go:117] "RemoveContainer" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.523511 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-pfzkj" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.608442 4804 scope.go:117] "RemoveContainer" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.608617 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.614657 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-ovs-pfzkj"] Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.630987 4804 scope.go:117] "RemoveContainer" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.656563 4804 scope.go:117] "RemoveContainer" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" Jan 28 11:46:02 crc kubenswrapper[4804]: E0128 11:46:02.657587 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067\": container with ID starting with 27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067 not found: ID does not exist" containerID="27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.657627 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067"} err="failed to get container status \"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067\": rpc error: code = NotFound desc = could not find container \"27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067\": container with ID starting with 27251d001061a6b29736040e8396638a313e97d5b4f08878071cc520714f4067 not found: ID does not exist" Jan 28 11:46:02 
crc kubenswrapper[4804]: I0128 11:46:02.657653 4804 scope.go:117] "RemoveContainer" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:46:02 crc kubenswrapper[4804]: E0128 11:46:02.657908 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784\": container with ID starting with b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 not found: ID does not exist" containerID="b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.657932 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784"} err="failed to get container status \"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784\": rpc error: code = NotFound desc = could not find container \"b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784\": container with ID starting with b8725c5b55908b2323120a3191de88e69bc4c1051f5a713dd003326ed1466784 not found: ID does not exist" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.657950 4804 scope.go:117] "RemoveContainer" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" Jan 28 11:46:02 crc kubenswrapper[4804]: E0128 11:46:02.658173 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d\": container with ID starting with 67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d not found: ID does not exist" containerID="67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.658198 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d"} err="failed to get container status \"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d\": rpc error: code = NotFound desc = could not find container \"67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d\": container with ID starting with 67b0d91c10e53018db2af7ce2c41ae5d1ca025c9dfacd3761b22578732f5e55d not found: ID does not exist" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.922605 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" path="/var/lib/kubelet/pods/9d301959-ed06-4b22-8e97-f3fc9a9bc491/volumes" Jan 28 11:46:02 crc kubenswrapper[4804]: I0128 11:46:02.945277 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/swift-storage-0" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051558 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051727 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051822 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051844 4804 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051869 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.051939 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") pod \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\" (UID: \"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc\") " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.052344 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache" (OuterVolumeSpecName: "cache") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.052407 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock" (OuterVolumeSpecName: "lock") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "lock". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.058262 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "swift") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.058426 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.059801 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t" (OuterVolumeSpecName: "kube-api-access-t2q8t") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "kube-api-access-t2q8t". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153371 4804 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153516 4804 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-lock\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153528 4804 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153538 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t2q8t\" (UniqueName: \"kubernetes.io/projected/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-kube-api-access-t2q8t\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.153550 4804 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-cache\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.167762 4804 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.254336 4804 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.298282 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" (UID: "f452e749-06e2-4b9c-a4d7-8a63ccd07cfc"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.355443 4804 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544156 4804 generic.go:334] "Generic (PLEG): container finished" podID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerID="3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" exitCode=137 Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544195 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb"} Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544244 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/swift-storage-0" event={"ID":"f452e749-06e2-4b9c-a4d7-8a63ccd07cfc","Type":"ContainerDied","Data":"4a5bec567872839575faf98626366f5cc236d0134aa37c746f2c87478bb70e91"} Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544250 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/swift-storage-0" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.544263 4804 scope.go:117] "RemoveContainer" containerID="3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.572042 4804 scope.go:117] "RemoveContainer" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.578632 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.586496 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/swift-storage-0"] Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.610976 4804 scope.go:117] "RemoveContainer" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.629522 4804 scope.go:117] "RemoveContainer" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.645856 4804 scope.go:117] "RemoveContainer" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.660172 4804 scope.go:117] "RemoveContainer" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.676524 4804 scope.go:117] "RemoveContainer" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.691918 4804 scope.go:117] "RemoveContainer" containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.710828 4804 scope.go:117] "RemoveContainer" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" Jan 28 11:46:03 crc 
kubenswrapper[4804]: I0128 11:46:03.731230 4804 scope.go:117] "RemoveContainer" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.750047 4804 scope.go:117] "RemoveContainer" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.772542 4804 scope.go:117] "RemoveContainer" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.791500 4804 scope.go:117] "RemoveContainer" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.813844 4804 scope.go:117] "RemoveContainer" containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.830729 4804 scope.go:117] "RemoveContainer" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.848695 4804 scope.go:117] "RemoveContainer" containerID="3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.849170 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb\": container with ID starting with 3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb not found: ID does not exist" containerID="3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849209 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb"} err="failed to get container status 
\"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb\": rpc error: code = NotFound desc = could not find container \"3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb\": container with ID starting with 3271f886ef30f5d6c4fa399a56bd095b93fadf3b8666ebe26b103bb6d281dfeb not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849235 4804 scope.go:117] "RemoveContainer" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.849526 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b\": container with ID starting with a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b not found: ID does not exist" containerID="a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849557 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b"} err="failed to get container status \"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b\": rpc error: code = NotFound desc = could not find container \"a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b\": container with ID starting with a25db8a6f9c421eec15bde91e5c2be3c905af97e9a827318ba5736399b2dac1b not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849589 4804 scope.go:117] "RemoveContainer" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.849937 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e\": container with ID starting with 43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e not found: ID does not exist" containerID="43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849967 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e"} err="failed to get container status \"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e\": rpc error: code = NotFound desc = could not find container \"43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e\": container with ID starting with 43217138bcc256827db237f0affef8cf721e8ee68be2ac6f0a6a56ce15e8729e not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.849985 4804 scope.go:117] "RemoveContainer" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.850216 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5\": container with ID starting with 02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5 not found: ID does not exist" containerID="02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850242 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5"} err="failed to get container status \"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5\": rpc error: code = NotFound desc = could not find container \"02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5\": container with ID 
starting with 02b9f794dcc62693a27b9c9d97188ba9d3eaae0a76ef2e0e81fd98f4fb4b3dd5 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850261 4804 scope.go:117] "RemoveContainer" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.850554 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3\": container with ID starting with 88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3 not found: ID does not exist" containerID="88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850584 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3"} err="failed to get container status \"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3\": rpc error: code = NotFound desc = could not find container \"88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3\": container with ID starting with 88bb024776cdd5e6c32c0049425db15340c324467a1ab1b21e95154b5a375dc3 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850602 4804 scope.go:117] "RemoveContainer" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.850854 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d\": container with ID starting with f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d not found: ID does not exist" containerID="f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d" Jan 28 
11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850901 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d"} err="failed to get container status \"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d\": rpc error: code = NotFound desc = could not find container \"f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d\": container with ID starting with f140547ceea2ce655a561b4446eece577ef76c816b5b44b6ba30a5f84dffb62d not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.850921 4804 scope.go:117] "RemoveContainer" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.851303 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16\": container with ID starting with 5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16 not found: ID does not exist" containerID="5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851331 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16"} err="failed to get container status \"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16\": rpc error: code = NotFound desc = could not find container \"5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16\": container with ID starting with 5fc6b82e95588e3c67bd417750ff6e8865c6de4f74048e228cf3ec7e3a916f16 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851346 4804 scope.go:117] "RemoveContainer" 
containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.851627 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657\": container with ID starting with e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657 not found: ID does not exist" containerID="e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851681 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657"} err="failed to get container status \"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657\": rpc error: code = NotFound desc = could not find container \"e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657\": container with ID starting with e85916efa9e5325c2ad2c75fd6b9377a835604797e2e51a120ffe0c3d6be5657 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.851719 4804 scope.go:117] "RemoveContainer" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.852062 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161\": container with ID starting with ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161 not found: ID does not exist" containerID="ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852093 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161"} err="failed to get container status \"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161\": rpc error: code = NotFound desc = could not find container \"ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161\": container with ID starting with ed988f657b3f2e5ae46fa4bea6c788ac3c92b799e1cd10fa208a31f97d3c1161 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852113 4804 scope.go:117] "RemoveContainer" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.852361 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6\": container with ID starting with fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6 not found: ID does not exist" containerID="fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852388 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6"} err="failed to get container status \"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6\": rpc error: code = NotFound desc = could not find container \"fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6\": container with ID starting with fd4102af663fb58787a9a276001861af0fde0510825337b4cf7956aebc0f63e6 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852402 4804 scope.go:117] "RemoveContainer" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.852849 4804 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20\": container with ID starting with a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20 not found: ID does not exist" containerID="a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852876 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20"} err="failed to get container status \"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20\": rpc error: code = NotFound desc = could not find container \"a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20\": container with ID starting with a4728cbbd251059d0d3addb27abcfa94bf41fc7c22e237dea38d2fbd3904cd20 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.852908 4804 scope.go:117] "RemoveContainer" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.853128 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5\": container with ID starting with a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5 not found: ID does not exist" containerID="a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853159 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5"} err="failed to get container status \"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5\": rpc error: code = NotFound desc = could not find container 
\"a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5\": container with ID starting with a8e14ca77d7c8fd18f3924dc3da7e4b091f09d8b1ff5200c8fee855b2658d7b5 not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853183 4804 scope.go:117] "RemoveContainer" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.853397 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a\": container with ID starting with 2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a not found: ID does not exist" containerID="2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853425 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a"} err="failed to get container status \"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a\": rpc error: code = NotFound desc = could not find container \"2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a\": container with ID starting with 2c2314deed0e590e76fe04a80d9ccfc37a544fe41a188da4ec8472aeb6505e5a not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853445 4804 scope.go:117] "RemoveContainer" containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.853699 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb\": container with ID starting with 1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb not found: ID does not exist" 
containerID="1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853727 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb"} err="failed to get container status \"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb\": rpc error: code = NotFound desc = could not find container \"1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb\": container with ID starting with 1fe16309afd893c909e07baf33a36c266198f7808e910ea1e6aa7c01614f6fcb not found: ID does not exist" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.853745 4804 scope.go:117] "RemoveContainer" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" Jan 28 11:46:03 crc kubenswrapper[4804]: E0128 11:46:03.854083 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf\": container with ID starting with c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf not found: ID does not exist" containerID="c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf" Jan 28 11:46:03 crc kubenswrapper[4804]: I0128 11:46:03.854114 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf"} err="failed to get container status \"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf\": rpc error: code = NotFound desc = could not find container \"c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf\": container with ID starting with c95bce2781ceba6739be4984b791d627b1a653c4f5f17c047464bb526f46fcdf not found: ID does not exist" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.533612 4804 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540014 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540037 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540052 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540059 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540070 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540078 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540092 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540099 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540110 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540118 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540130 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540138 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540152 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540159 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540172 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="openstack-network-exporter" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540179 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="openstack-network-exporter" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540189 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540197 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540208 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540215 4804 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540226 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540234 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540245 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540253 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540262 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540268 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540281 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540288 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540301 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540309 4804 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540323 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540330 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540343 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540350 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540358 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server-init" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540365 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server-init" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540374 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540382 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540395 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540403 4804 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540414 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540422 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540436 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540444 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540453 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540461 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540472 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540479 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540490 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540497 4804 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540506 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540514 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540526 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540535 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540544 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540551 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540559 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540567 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540578 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540585 4804 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540597 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540605 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540618 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540624 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="setup-container" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540636 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540647 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540658 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540666 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540675 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540683 4804 
state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540693 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540701 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540713 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540721 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540733 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540740 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540753 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="swift-recon-cron" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540761 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="swift-recon-cron" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540772 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540780 4804 
state_mem.go:107] "Deleted CPUSet assignment" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540792 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540799 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540828 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540836 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540843 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540848 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540856 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540862 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540869 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540874 4804 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540913 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540921 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540931 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540938 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540947 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540956 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540964 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540971 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: E0128 11:46:04.540982 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="mysql-bootstrap" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.540988 4804 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="mysql-bootstrap" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541492 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541514 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="469a0049-480f-4cde-848d-4b11cb54130b" containerName="nova-scheduler-scheduler" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541549 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541566 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d47089ce-8b52-4bd3-a30e-04736fed01fc" containerName="memcached" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541577 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="76d127f1-97d9-4552-9bdb-b3482a45951d" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541585 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec6a5a02-2cbe-421b-bcf5-54572e000f28" containerName="ovn-controller" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541596 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-central-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541605 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541641 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-expirer" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541653 4804 
memory_manager.go:354] "RemoveStaleState removing state" podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541967 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541986 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="sg-core" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.541994 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542006 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542018 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="ovn-northd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542127 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-reaper" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542142 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5198da96-d6b6-4b80-bb93-838dff10730e" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542154 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542165 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: 
I0128 11:46:04.542204 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="swift-recon-cron" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542492 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="ceilometer-notification-agent" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542512 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovsdb-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542661 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542676 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="878daeff-34bf-4dab-8118-e42c318849bb" containerName="barbican-worker-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542691 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="edcdd787-6628-49ee-abcf-0146c096f547" containerName="openstack-network-exporter" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542792 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="be9960aa-e7e6-4408-bad6-3eb1ff4ee2e8" containerName="mariadb-account-create-update" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542845 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542858 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.542870 4804 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="ae0fb199-797a-40c6-8c71-3b5a976b6c61" containerName="nova-api-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543027 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543037 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-auditor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543045 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="82ef8b43-de59-45f8-9c2a-765c5709054b" containerName="barbican-keystone-listener" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543053 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f5cdaa9-8b1d-44b2-bfe6-d986f680327f" containerName="glance-log" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543062 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="90f5a2ef-6224-4af8-8bba-32c689a960f1" containerName="proxy-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543185 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d301959-ed06-4b22-8e97-f3fc9a9bc491" containerName="ovs-vswitchd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543200 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-updater" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543207 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="object-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543296 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="095bc753-88c4-456c-a3ae-aa0040a76338" containerName="neutron-httpd" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543325 4804 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="container-replicator" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543336 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="6af777f5-5dfc-4f4d-b7c5-dd0de3f80def" containerName="kube-state-metrics" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543344 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e88e9db-b96d-4009-a4e6-ccbb5be53f85" containerName="nova-cell1-conductor-conductor" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543542 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7c5c969-c4c2-4f76-b3c6-152473159e78" containerName="rabbitmq" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543552 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="33d56e9c-416a-4816-81a7-8def89c20c8e" containerName="galera" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543561 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="account-server" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543571 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" containerName="rsync" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.543585 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="4efe85dc-b64c-4cbe-83f7-89fa462a95a0" containerName="keystone-api" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.545626 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.555217 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.584872 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.584947 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.585086 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.686771 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.686848 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.686906 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.687515 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.687525 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.718373 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"redhat-operators-542mk\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.899715 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:04 crc kubenswrapper[4804]: I0128 11:46:04.925961 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f452e749-06e2-4b9c-a4d7-8a63ccd07cfc" path="/var/lib/kubelet/pods/f452e749-06e2-4b9c-a4d7-8a63ccd07cfc/volumes" Jan 28 11:46:05 crc kubenswrapper[4804]: I0128 11:46:05.325487 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:05 crc kubenswrapper[4804]: W0128 11:46:05.331590 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97cb07bd_2024_4cd4_aed6_86ccdfcf50b5.slice/crio-09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441 WatchSource:0}: Error finding container 09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441: Status 404 returned error can't find the container with id 09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441 Jan 28 11:46:05 crc kubenswrapper[4804]: I0128 11:46:05.609386 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15"} Jan 28 11:46:05 crc kubenswrapper[4804]: I0128 11:46:05.609845 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441"} Jan 28 11:46:06 crc kubenswrapper[4804]: I0128 11:46:06.619631 4804 generic.go:334] "Generic (PLEG): container finished" podID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" exitCode=0 Jan 28 11:46:06 crc kubenswrapper[4804]: I0128 11:46:06.619685 
4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15"} Jan 28 11:46:08 crc kubenswrapper[4804]: I0128 11:46:08.640581 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070"} Jan 28 11:46:09 crc kubenswrapper[4804]: I0128 11:46:09.648900 4804 generic.go:334] "Generic (PLEG): container finished" podID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" exitCode=0 Jan 28 11:46:09 crc kubenswrapper[4804]: I0128 11:46:09.649165 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070"} Jan 28 11:46:10 crc kubenswrapper[4804]: I0128 11:46:10.667696 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerStarted","Data":"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17"} Jan 28 11:46:10 crc kubenswrapper[4804]: I0128 11:46:10.690666 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-542mk" podStartSLOduration=2.998582142 podStartE2EDuration="6.690641375s" podCreationTimestamp="2026-01-28 11:46:04 +0000 UTC" firstStartedPulling="2026-01-28 11:46:06.621400054 +0000 UTC m=+1442.416280038" lastFinishedPulling="2026-01-28 11:46:10.313459287 +0000 UTC m=+1446.108339271" observedRunningTime="2026-01-28 11:46:10.688832259 +0000 UTC 
m=+1446.483712243" watchObservedRunningTime="2026-01-28 11:46:10.690641375 +0000 UTC m=+1446.485521359" Jan 28 11:46:14 crc kubenswrapper[4804]: I0128 11:46:14.900825 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:14 crc kubenswrapper[4804]: I0128 11:46:14.901211 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:15 crc kubenswrapper[4804]: I0128 11:46:15.950309 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-542mk" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server" probeResult="failure" output=< Jan 28 11:46:15 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s Jan 28 11:46:15 crc kubenswrapper[4804]: > Jan 28 11:46:24 crc kubenswrapper[4804]: I0128 11:46:24.948057 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:24 crc kubenswrapper[4804]: I0128 11:46:24.995760 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:25 crc kubenswrapper[4804]: I0128 11:46:25.187543 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:26 crc kubenswrapper[4804]: I0128 11:46:26.792154 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-542mk" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server" containerID="cri-o://1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" gracePeriod=2 Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.164628 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.317736 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") pod \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.317799 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") pod \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.317818 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") pod \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\" (UID: \"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5\") " Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.319279 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities" (OuterVolumeSpecName: "utilities") pod "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" (UID: "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.324083 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl" (OuterVolumeSpecName: "kube-api-access-xwlwl") pod "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" (UID: "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5"). InnerVolumeSpecName "kube-api-access-xwlwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.419855 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwlwl\" (UniqueName: \"kubernetes.io/projected/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-kube-api-access-xwlwl\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.419912 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.468287 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" (UID: "97cb07bd-2024-4cd4-aed6-86ccdfcf50b5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.521049 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801653 4804 generic.go:334] "Generic (PLEG): container finished" podID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" exitCode=0 Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801693 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17"} Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801720 4804 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-542mk" event={"ID":"97cb07bd-2024-4cd4-aed6-86ccdfcf50b5","Type":"ContainerDied","Data":"09e0a1a7b843aecaf209e41ada098ba9a5b364e573b5992cfe1625b0b95ef441"} Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801739 4804 scope.go:117] "RemoveContainer" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.801846 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-542mk" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.833133 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.835856 4804 scope.go:117] "RemoveContainer" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.839668 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-542mk"] Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.876844 4804 scope.go:117] "RemoveContainer" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.893996 4804 scope.go:117] "RemoveContainer" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" Jan 28 11:46:27 crc kubenswrapper[4804]: E0128 11:46:27.894449 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17\": container with ID starting with 1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17 not found: ID does not exist" containerID="1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.894501 4804 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17"} err="failed to get container status \"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17\": rpc error: code = NotFound desc = could not find container \"1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17\": container with ID starting with 1660de3184c66badf7191af37757365fde4ee56de0e0c3b6b5920b3efac14e17 not found: ID does not exist" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.894529 4804 scope.go:117] "RemoveContainer" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" Jan 28 11:46:27 crc kubenswrapper[4804]: E0128 11:46:27.894870 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070\": container with ID starting with a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070 not found: ID does not exist" containerID="a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.894926 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070"} err="failed to get container status \"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070\": rpc error: code = NotFound desc = could not find container \"a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070\": container with ID starting with a5b5082492f3c5a798a9eccd578f818cda427feac6f716544221d87691d45070 not found: ID does not exist" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.894951 4804 scope.go:117] "RemoveContainer" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" Jan 28 11:46:27 crc kubenswrapper[4804]: E0128 
11:46:27.895210 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15\": container with ID starting with 4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15 not found: ID does not exist" containerID="4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15" Jan 28 11:46:27 crc kubenswrapper[4804]: I0128 11:46:27.895245 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15"} err="failed to get container status \"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15\": rpc error: code = NotFound desc = could not find container \"4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15\": container with ID starting with 4c643038045eb56bc7bef71b8291e02d97eb409682b3384a4fb335ecd6fc0b15 not found: ID does not exist" Jan 28 11:46:28 crc kubenswrapper[4804]: I0128 11:46:28.924802 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" path="/var/lib/kubelet/pods/97cb07bd-2024-4cd4-aed6-86ccdfcf50b5/volumes" Jan 28 11:47:12 crc kubenswrapper[4804]: I0128 11:47:12.582557 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:47:12 crc kubenswrapper[4804]: I0128 11:47:12.583172 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.649977 4804 scope.go:117] "RemoveContainer" containerID="7501d75daa32f7ac9da494ff4510c6c7b84e72c6cd5d7a36b873ba97e31ca357" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.679637 4804 scope.go:117] "RemoveContainer" containerID="1b59702421a69d6833edc7663b102672fc847c9132ffeaf19a10a5a8788602d2" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.698114 4804 scope.go:117] "RemoveContainer" containerID="61f6d7d8df2b93d1c2aa1ade5c1c81fe0cb73ba040cbf0a84450d89f676d1c96" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.716993 4804 scope.go:117] "RemoveContainer" containerID="6ce17aece748b9da79e3085fe6d476a5deab47316ec4672ba0cbe650d2deca37" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.741337 4804 scope.go:117] "RemoveContainer" containerID="f3135f22df67a9f998ea737f7764f24294ba0c3f0ee5a1682b6d2623e608a549" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.764812 4804 scope.go:117] "RemoveContainer" containerID="9ebbd370fba6d4ae4e403a102d6071f40119646995ef2452c9e5a36cd8033a5d" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.796691 4804 scope.go:117] "RemoveContainer" containerID="350f3ad47814ad13668216a271a72da43f7b115b973ca0e4f205bd9b83981f82" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.814509 4804 scope.go:117] "RemoveContainer" containerID="083be3913b9cea293776996ed70c579f5b987734d7d6618ce37907eb76d96885" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.833392 4804 scope.go:117] "RemoveContainer" containerID="eb8aeef081bed9fc3291d5cfeded1565dd1b1b9b2083d0292898d1582434080f" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.848855 4804 scope.go:117] "RemoveContainer" containerID="f7789d2bdd1334c4462a3af29ff8ca19fc4d47aa63dc768208c1612ddcee666a" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.863469 4804 scope.go:117] "RemoveContainer" containerID="91137eb6aeea940f4af2b3e77f249fa514f8d6f12484bb39c0b7af92b6cead6f" Jan 28 11:47:19 crc 
kubenswrapper[4804]: I0128 11:47:19.885951 4804 scope.go:117] "RemoveContainer" containerID="5575fa4ddc8773670c0f493f88df21ff86a53d01b7736599cdb3fe2b123bacad" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.901962 4804 scope.go:117] "RemoveContainer" containerID="0e142e02c8a274046814a6325bfd4965bb106ee5efa7e215372b93e33be734e4" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.918241 4804 scope.go:117] "RemoveContainer" containerID="acc629a29baa94b90886caa052a9712308190fcbd858f031b8ca85b990fe85e5" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.935227 4804 scope.go:117] "RemoveContainer" containerID="07d005b2c14a47d4da694ee14fd26759eafe1775650f3812e43c2a15c848c61f" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.953102 4804 scope.go:117] "RemoveContainer" containerID="33b6a6135853b57c0111bf580d3d2c2cfc12a6ddcba054451c960f37e0cda40d" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.970847 4804 scope.go:117] "RemoveContainer" containerID="445cd2aea23cf7159b1e5bbce268d62cd1c9a1d5072f21d98a9181a420bf2e56" Jan 28 11:47:19 crc kubenswrapper[4804]: I0128 11:47:19.990932 4804 scope.go:117] "RemoveContainer" containerID="1458a9f0fdf6329fef09a5d8735c3d60b67ac3518f533ed20b00b17805f5df6e" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.006552 4804 scope.go:117] "RemoveContainer" containerID="647b1f190be0e34804a1719e55a8c2587f822eeb47af8070a4c99ed681d8f789" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.024269 4804 scope.go:117] "RemoveContainer" containerID="90654b28f7b1bc46ccc040db22917c371a0f4ddcc12c4c2ea186a6c9f6f7e0b1" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.046510 4804 scope.go:117] "RemoveContainer" containerID="afc5376aa5a4fb69874f078b35845b9a204c99fa74239aab619e23b2ca9f242b" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 11:47:20.068998 4804 scope.go:117] "RemoveContainer" containerID="630d245e2b53140749f6a43e742aa23a22cf07e20dff45a1938f861c8866cefa" Jan 28 11:47:20 crc kubenswrapper[4804]: I0128 
11:47:20.100688 4804 scope.go:117] "RemoveContainer" containerID="17b7bc7812de15b0ba6dad22d3ba3bb61255869891da2c8a992a0d46bd5333d8" Jan 28 11:47:42 crc kubenswrapper[4804]: I0128 11:47:42.581860 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:47:42 crc kubenswrapper[4804]: I0128 11:47:42.582335 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.581987 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.582604 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.582645 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.583247 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:48:12 crc kubenswrapper[4804]: I0128 11:48:12.583300 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" gracePeriod=600 Jan 28 11:48:13 crc kubenswrapper[4804]: E0128 11:48:13.330271 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.632852 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" exitCode=0 Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.632968 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d"} Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.633084 4804 scope.go:117] "RemoveContainer" containerID="4bffdd4d5a4ad0d46a47b95458a7c8bdaf05a4c4019b6b412dce10eb63d37e95" Jan 28 11:48:13 crc kubenswrapper[4804]: I0128 11:48:13.633666 4804 
scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:48:13 crc kubenswrapper[4804]: E0128 11:48:13.634046 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.354919 4804 scope.go:117] "RemoveContainer" containerID="c678cbe047e0072936e6685fda5e2cdde34f1bc266bf8023e6e395194b174396" Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.395163 4804 scope.go:117] "RemoveContainer" containerID="141148b29896e3f2f9d12c3faec258d3e962851d2411ef8203fd3511f78f472c" Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.416528 4804 scope.go:117] "RemoveContainer" containerID="905c09b793697a4d6c52520b6966a20f7c9e6354b274348d7425039892c0fbb9" Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.463291 4804 scope.go:117] "RemoveContainer" containerID="39f3d9fd533ba3d14095e02fb7f969a867f9aaeea3368bde1bf4f16b61454f75" Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.502144 4804 scope.go:117] "RemoveContainer" containerID="e0578f336cec25aad377224f179ea54ee5afd99b6a706cbe778740c4a7fd261d" Jan 28 11:48:20 crc kubenswrapper[4804]: I0128 11:48:20.537502 4804 scope.go:117] "RemoveContainer" containerID="dc599447325170297407d10ffc4cdfee6dcb5608ba938fdf91f777cfd7556821" Jan 28 11:48:27 crc kubenswrapper[4804]: I0128 11:48:27.915300 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:48:27 crc kubenswrapper[4804]: E0128 11:48:27.916081 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.471380 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"] Jan 28 11:48:33 crc kubenswrapper[4804]: E0128 11:48:33.474377 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-content" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.474844 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-content" Jan 28 11:48:33 crc kubenswrapper[4804]: E0128 11:48:33.475352 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.475582 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server" Jan 28 11:48:33 crc kubenswrapper[4804]: E0128 11:48:33.475740 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-utilities" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.475856 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="extract-utilities" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.476402 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="97cb07bd-2024-4cd4-aed6-86ccdfcf50b5" containerName="registry-server" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.482164 4804 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.487997 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"] Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.586563 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.586621 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.586651 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689094 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689539 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689590 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.689655 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.690093 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.715925 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod \"certified-operators-d9v6p\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:33 crc kubenswrapper[4804]: I0128 11:48:33.818494 4804 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.330597 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"] Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.780755 4804 generic.go:334] "Generic (PLEG): container finished" podID="36dace41-3e60-485b-8a38-7678187e37bc" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690" exitCode=0 Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.780817 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690"} Jan 28 11:48:34 crc kubenswrapper[4804]: I0128 11:48:34.780871 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerStarted","Data":"23614a20679ce59f15145c1e802dbdc5ecc324238e99d3d474c222adfacf2c91"} Jan 28 11:48:36 crc kubenswrapper[4804]: I0128 11:48:36.794720 4804 generic.go:334] "Generic (PLEG): container finished" podID="36dace41-3e60-485b-8a38-7678187e37bc" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601" exitCode=0 Jan 28 11:48:36 crc kubenswrapper[4804]: I0128 11:48:36.795012 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601"} Jan 28 11:48:37 crc kubenswrapper[4804]: I0128 11:48:37.804840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" 
event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerStarted","Data":"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"} Jan 28 11:48:37 crc kubenswrapper[4804]: I0128 11:48:37.820547 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-d9v6p" podStartSLOduration=2.284818576 podStartE2EDuration="4.820521592s" podCreationTimestamp="2026-01-28 11:48:33 +0000 UTC" firstStartedPulling="2026-01-28 11:48:34.782068162 +0000 UTC m=+1590.576948146" lastFinishedPulling="2026-01-28 11:48:37.317771178 +0000 UTC m=+1593.112651162" observedRunningTime="2026-01-28 11:48:37.818387474 +0000 UTC m=+1593.613267478" watchObservedRunningTime="2026-01-28 11:48:37.820521592 +0000 UTC m=+1593.615401576" Jan 28 11:48:40 crc kubenswrapper[4804]: I0128 11:48:40.915096 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:48:40 crc kubenswrapper[4804]: E0128 11:48:40.915868 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.818697 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.818761 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.866217 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:43 crc kubenswrapper[4804]: I0128 11:48:43.911143 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:44 crc kubenswrapper[4804]: I0128 11:48:44.102500 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"] Jan 28 11:48:45 crc kubenswrapper[4804]: I0128 11:48:45.861453 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-d9v6p" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" containerID="cri-o://c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d" gracePeriod=2 Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.299444 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.384247 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") pod \"36dace41-3e60-485b-8a38-7678187e37bc\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.384338 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") pod \"36dace41-3e60-485b-8a38-7678187e37bc\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.384455 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") pod 
\"36dace41-3e60-485b-8a38-7678187e37bc\" (UID: \"36dace41-3e60-485b-8a38-7678187e37bc\") " Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.387910 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities" (OuterVolumeSpecName: "utilities") pod "36dace41-3e60-485b-8a38-7678187e37bc" (UID: "36dace41-3e60-485b-8a38-7678187e37bc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.399172 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f" (OuterVolumeSpecName: "kube-api-access-dnd2f") pod "36dace41-3e60-485b-8a38-7678187e37bc" (UID: "36dace41-3e60-485b-8a38-7678187e37bc"). InnerVolumeSpecName "kube-api-access-dnd2f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.440752 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "36dace41-3e60-485b-8a38-7678187e37bc" (UID: "36dace41-3e60-485b-8a38-7678187e37bc"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.486419 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.486464 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnd2f\" (UniqueName: \"kubernetes.io/projected/36dace41-3e60-485b-8a38-7678187e37bc-kube-api-access-dnd2f\") on node \"crc\" DevicePath \"\"" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.486480 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/36dace41-3e60-485b-8a38-7678187e37bc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870015 4804 generic.go:334] "Generic (PLEG): container finished" podID="36dace41-3e60-485b-8a38-7678187e37bc" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d" exitCode=0 Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870076 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"} Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870145 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-d9v6p" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870383 4804 scope.go:117] "RemoveContainer" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.870367 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-d9v6p" event={"ID":"36dace41-3e60-485b-8a38-7678187e37bc","Type":"ContainerDied","Data":"23614a20679ce59f15145c1e802dbdc5ecc324238e99d3d474c222adfacf2c91"} Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.900925 4804 scope.go:117] "RemoveContainer" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.912820 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"] Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.930589 4804 scope.go:117] "RemoveContainer" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.948852 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-d9v6p"] Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.948981 4804 scope.go:117] "RemoveContainer" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d" Jan 28 11:48:46 crc kubenswrapper[4804]: E0128 11:48:46.949418 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d\": container with ID starting with c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d not found: ID does not exist" containerID="c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949464 4804 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d"} err="failed to get container status \"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d\": rpc error: code = NotFound desc = could not find container \"c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d\": container with ID starting with c0f4d5f52bbdeeea2d48ce7d0699cda9889d84e7f0c6e5a889a825c08bbf9a5d not found: ID does not exist" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949496 4804 scope.go:117] "RemoveContainer" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601" Jan 28 11:48:46 crc kubenswrapper[4804]: E0128 11:48:46.949783 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601\": container with ID starting with a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601 not found: ID does not exist" containerID="a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949805 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601"} err="failed to get container status \"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601\": rpc error: code = NotFound desc = could not find container \"a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601\": container with ID starting with a534b6335d7e816ca1d538f4b54d3a072bcd741400bac357a3abf2cb22bbe601 not found: ID does not exist" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.949823 4804 scope.go:117] "RemoveContainer" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690" Jan 28 11:48:46 crc kubenswrapper[4804]: E0128 
11:48:46.950112 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690\": container with ID starting with b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690 not found: ID does not exist" containerID="b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690" Jan 28 11:48:46 crc kubenswrapper[4804]: I0128 11:48:46.950281 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690"} err="failed to get container status \"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690\": rpc error: code = NotFound desc = could not find container \"b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690\": container with ID starting with b66d61510b8ad656b1569bf069b418886e9ed21a3bdd85c1e3a298710f6f8690 not found: ID does not exist" Jan 28 11:48:48 crc kubenswrapper[4804]: I0128 11:48:48.925716 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="36dace41-3e60-485b-8a38-7678187e37bc" path="/var/lib/kubelet/pods/36dace41-3e60-485b-8a38-7678187e37bc/volumes" Jan 28 11:48:55 crc kubenswrapper[4804]: I0128 11:48:55.915206 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:48:55 crc kubenswrapper[4804]: E0128 11:48:55.916028 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:09 crc kubenswrapper[4804]: I0128 11:49:09.915364 
4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:09 crc kubenswrapper[4804]: E0128 11:49:09.916213 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.696875 4804 scope.go:117] "RemoveContainer" containerID="4826b18cb81abb4e1ff9ad1e5f7d66bf9704f751e4eaecf9575b178485d52c14" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.770283 4804 scope.go:117] "RemoveContainer" containerID="942dab2562186e8c843d08a81baf4b10000e2f951efd28dd679bda2d6239dabc" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.787921 4804 scope.go:117] "RemoveContainer" containerID="4396681344b1f4b062c4d3af20aad6ea83e5895641201a1d6581293d78a469d6" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.817253 4804 scope.go:117] "RemoveContainer" containerID="1aa2852183ab3447d372d5d5e67a6b2f61d8ddd3d77cfdf97f897ca4044fdfeb" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.846150 4804 scope.go:117] "RemoveContainer" containerID="00fa4f179f72ae4ed60b5277bb72d034bf25e0316d4ff2c0b245c99e5bbbb1c0" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.873699 4804 scope.go:117] "RemoveContainer" containerID="75c0ffcb0c025a38e738831b1e54d6accb5a07b7f29d2b3b100a75e69d401044" Jan 28 11:49:20 crc kubenswrapper[4804]: I0128 11:49:20.897158 4804 scope.go:117] "RemoveContainer" containerID="d61b26c6574f005cf741e8617cfd877723c9dba4e0c0da9dc9d5ab35b7c99c44" Jan 28 11:49:21 crc kubenswrapper[4804]: I0128 11:49:21.914343 4804 scope.go:117] "RemoveContainer" 
containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:21 crc kubenswrapper[4804]: E0128 11:49:21.914598 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:36 crc kubenswrapper[4804]: I0128 11:49:36.915827 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:36 crc kubenswrapper[4804]: E0128 11:49:36.916724 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:48 crc kubenswrapper[4804]: I0128 11:49:48.915675 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:48 crc kubenswrapper[4804]: E0128 11:49:48.916580 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:49:59 crc kubenswrapper[4804]: I0128 11:49:59.914851 4804 scope.go:117] 
"RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:49:59 crc kubenswrapper[4804]: E0128 11:49:59.915866 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:12 crc kubenswrapper[4804]: I0128 11:50:12.915782 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:12 crc kubenswrapper[4804]: E0128 11:50:12.918970 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:21 crc kubenswrapper[4804]: I0128 11:50:21.002608 4804 scope.go:117] "RemoveContainer" containerID="a2eabfea7974e19dcb056faf4aba79a46119c1df2377b8eb64616fb881ba0268" Jan 28 11:50:21 crc kubenswrapper[4804]: I0128 11:50:21.055962 4804 scope.go:117] "RemoveContainer" containerID="14d679b7ac81e4e13ea78d091c6bcc493eebbfb6bcb668dffab054c4661eb685" Jan 28 11:50:23 crc kubenswrapper[4804]: I0128 11:50:23.915207 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:23 crc kubenswrapper[4804]: E0128 11:50:23.915760 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:36 crc kubenswrapper[4804]: I0128 11:50:36.915628 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:36 crc kubenswrapper[4804]: E0128 11:50:36.916406 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:50:51 crc kubenswrapper[4804]: I0128 11:50:51.915367 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:50:51 crc kubenswrapper[4804]: E0128 11:50:51.916110 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:04 crc kubenswrapper[4804]: I0128 11:51:04.928513 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:04 crc kubenswrapper[4804]: E0128 11:51:04.931397 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:15 crc kubenswrapper[4804]: I0128 11:51:15.914589 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:15 crc kubenswrapper[4804]: E0128 11:51:15.915080 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.126194 4804 scope.go:117] "RemoveContainer" containerID="2c2804f6826c0c8a401ed21f9d0d5b1726c6192dce5dc3765fa6bb65769860e7" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.159312 4804 scope.go:117] "RemoveContainer" containerID="2cf37cb975241a8023292503844e50e2fd76dae6622e27d3a7bdc8476283ee2c" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.180055 4804 scope.go:117] "RemoveContainer" containerID="91cc51ff2b7594ba6b7c5b83ef291bdad1767dd300aa27e2d6fe9a547161ad93" Jan 28 11:51:21 crc kubenswrapper[4804]: I0128 11:51:21.202519 4804 scope.go:117] "RemoveContainer" containerID="67b5e53f1eb1c67a490461931a62e093efa88d74afa9352d1282f6ea7d2e449a" Jan 28 11:51:30 crc kubenswrapper[4804]: I0128 11:51:30.915132 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:30 crc kubenswrapper[4804]: E0128 11:51:30.915788 4804 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:44 crc kubenswrapper[4804]: I0128 11:51:44.921507 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:44 crc kubenswrapper[4804]: E0128 11:51:44.922295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:51:57 crc kubenswrapper[4804]: I0128 11:51:57.915577 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:51:57 crc kubenswrapper[4804]: E0128 11:51:57.917253 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:10 crc kubenswrapper[4804]: I0128 11:52:10.916038 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:10 crc kubenswrapper[4804]: E0128 11:52:10.917271 4804 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:22 crc kubenswrapper[4804]: I0128 11:52:22.915321 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:22 crc kubenswrapper[4804]: E0128 11:52:22.915989 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:36 crc kubenswrapper[4804]: I0128 11:52:36.915899 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:36 crc kubenswrapper[4804]: E0128 11:52:36.917302 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:52:50 crc kubenswrapper[4804]: I0128 11:52:50.915370 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:52:50 crc kubenswrapper[4804]: E0128 
11:52:50.916161 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:53:03 crc kubenswrapper[4804]: I0128 11:53:03.915548 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:53:03 crc kubenswrapper[4804]: E0128 11:53:03.916098 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:53:18 crc kubenswrapper[4804]: I0128 11:53:18.915152 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:53:19 crc kubenswrapper[4804]: I0128 11:53:19.697726 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903"} Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.787740 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:31 crc kubenswrapper[4804]: E0128 11:55:31.788492 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dace41-3e60-485b-8a38-7678187e37bc" 
containerName="extract-utilities" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788503 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="extract-utilities" Jan 28 11:55:31 crc kubenswrapper[4804]: E0128 11:55:31.788530 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788537 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" Jan 28 11:55:31 crc kubenswrapper[4804]: E0128 11:55:31.788549 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="extract-content" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788555 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="extract-content" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.788668 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="36dace41-3e60-485b-8a38-7678187e37bc" containerName="registry-server" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.789599 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.800665 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.876337 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.876398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.876449 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.977393 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.977605 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.977753 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.978540 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.978627 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:31 crc kubenswrapper[4804]: I0128 11:55:31.997472 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"redhat-marketplace-wrld9\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:32 crc kubenswrapper[4804]: I0128 11:55:32.113459 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:32 crc kubenswrapper[4804]: I0128 11:55:32.540447 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:32 crc kubenswrapper[4804]: I0128 11:55:32.586524 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerStarted","Data":"35a94545608fe9b4b67be83edb17ad0dd73fff1aa646c35e7ed33196ee854ab3"} Jan 28 11:55:33 crc kubenswrapper[4804]: I0128 11:55:33.593869 4804 generic.go:334] "Generic (PLEG): container finished" podID="151f894b-da15-43bf-8f8e-44b777c23b68" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" exitCode=0 Jan 28 11:55:33 crc kubenswrapper[4804]: I0128 11:55:33.593997 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542"} Jan 28 11:55:33 crc kubenswrapper[4804]: I0128 11:55:33.596453 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.191258 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.193096 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.210265 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.210743 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.210799 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.225013 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312015 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312074 4804 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312156 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312569 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.312648 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.336687 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"community-operators-dzjjh\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.584633 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.602133 4804 generic.go:334] "Generic (PLEG): container finished" podID="151f894b-da15-43bf-8f8e-44b777c23b68" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" exitCode=0 Jan 28 11:55:34 crc kubenswrapper[4804]: I0128 11:55:34.602181 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.131278 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:35 crc kubenswrapper[4804]: W0128 11:55:35.149556 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb808f833_2a0c_4378_96c7_d4b01ce592c1.slice/crio-8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f WatchSource:0}: Error finding container 8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f: Status 404 returned error can't find the container with id 8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.610361 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerStarted","Data":"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.612603 4804 generic.go:334] "Generic (PLEG): container finished" podID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerID="cb95515fba8896aa8f58f7264a95db4df434bb5d342570ccd7c478bd07868bea" exitCode=0 Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 
11:55:35.612654 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"cb95515fba8896aa8f58f7264a95db4df434bb5d342570ccd7c478bd07868bea"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.612704 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerStarted","Data":"8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f"} Jan 28 11:55:35 crc kubenswrapper[4804]: I0128 11:55:35.636258 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-wrld9" podStartSLOduration=3.193877301 podStartE2EDuration="4.63622779s" podCreationTimestamp="2026-01-28 11:55:31 +0000 UTC" firstStartedPulling="2026-01-28 11:55:33.596235202 +0000 UTC m=+2009.391115196" lastFinishedPulling="2026-01-28 11:55:35.038585701 +0000 UTC m=+2010.833465685" observedRunningTime="2026-01-28 11:55:35.629000916 +0000 UTC m=+2011.423880900" watchObservedRunningTime="2026-01-28 11:55:35.63622779 +0000 UTC m=+2011.431107774" Jan 28 11:55:37 crc kubenswrapper[4804]: I0128 11:55:37.625480 4804 generic.go:334] "Generic (PLEG): container finished" podID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerID="0189eb18d908e9eae746ff753a2b4759694081f5e06e0ce412145dc609c746c3" exitCode=0 Jan 28 11:55:37 crc kubenswrapper[4804]: I0128 11:55:37.625590 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"0189eb18d908e9eae746ff753a2b4759694081f5e06e0ce412145dc609c746c3"} Jan 28 11:55:38 crc kubenswrapper[4804]: I0128 11:55:38.636124 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" 
event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerStarted","Data":"b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958"} Jan 28 11:55:38 crc kubenswrapper[4804]: I0128 11:55:38.662715 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dzjjh" podStartSLOduration=2.194906669 podStartE2EDuration="4.662690771s" podCreationTimestamp="2026-01-28 11:55:34 +0000 UTC" firstStartedPulling="2026-01-28 11:55:35.613854655 +0000 UTC m=+2011.408734639" lastFinishedPulling="2026-01-28 11:55:38.081638757 +0000 UTC m=+2013.876518741" observedRunningTime="2026-01-28 11:55:38.654082594 +0000 UTC m=+2014.448962588" watchObservedRunningTime="2026-01-28 11:55:38.662690771 +0000 UTC m=+2014.457570775" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.114277 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.114732 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.159135 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.582836 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.582947 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:55:42 crc kubenswrapper[4804]: I0128 11:55:42.697488 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:43 crc kubenswrapper[4804]: I0128 11:55:43.976872 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.584751 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.585051 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.639396 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.688127 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-wrld9" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" containerID="cri-o://e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" gracePeriod=2 Jan 28 11:55:44 crc kubenswrapper[4804]: I0128 11:55:44.734009 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.188442 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.287827 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") pod \"151f894b-da15-43bf-8f8e-44b777c23b68\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.288061 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") pod \"151f894b-da15-43bf-8f8e-44b777c23b68\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.288184 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") pod \"151f894b-da15-43bf-8f8e-44b777c23b68\" (UID: \"151f894b-da15-43bf-8f8e-44b777c23b68\") " Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.289866 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities" (OuterVolumeSpecName: "utilities") pod "151f894b-da15-43bf-8f8e-44b777c23b68" (UID: "151f894b-da15-43bf-8f8e-44b777c23b68"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.298191 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg" (OuterVolumeSpecName: "kube-api-access-sh7vg") pod "151f894b-da15-43bf-8f8e-44b777c23b68" (UID: "151f894b-da15-43bf-8f8e-44b777c23b68"). InnerVolumeSpecName "kube-api-access-sh7vg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.390615 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sh7vg\" (UniqueName: \"kubernetes.io/projected/151f894b-da15-43bf-8f8e-44b777c23b68-kube-api-access-sh7vg\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.390661 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.472729 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "151f894b-da15-43bf-8f8e-44b777c23b68" (UID: "151f894b-da15-43bf-8f8e-44b777c23b68"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.492205 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/151f894b-da15-43bf-8f8e-44b777c23b68-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695550 4804 generic.go:334] "Generic (PLEG): container finished" podID="151f894b-da15-43bf-8f8e-44b777c23b68" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" exitCode=0 Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695645 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-wrld9" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695725 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60"} Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695837 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-wrld9" event={"ID":"151f894b-da15-43bf-8f8e-44b777c23b68","Type":"ContainerDied","Data":"35a94545608fe9b4b67be83edb17ad0dd73fff1aa646c35e7ed33196ee854ab3"} Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.695868 4804 scope.go:117] "RemoveContainer" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.722375 4804 scope.go:117] "RemoveContainer" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.731069 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.736375 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-wrld9"] Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.747277 4804 scope.go:117] "RemoveContainer" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.770139 4804 scope.go:117] "RemoveContainer" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" Jan 28 11:55:45 crc kubenswrapper[4804]: E0128 11:55:45.770551 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60\": container with ID starting with e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60 not found: ID does not exist" containerID="e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.770592 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60"} err="failed to get container status \"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60\": rpc error: code = NotFound desc = could not find container \"e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60\": container with ID starting with e75593065f7711172597b451f0376e7b8a8f66df83ce16d4df2d8616cfd7af60 not found: ID does not exist" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.770627 4804 scope.go:117] "RemoveContainer" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" Jan 28 11:55:45 crc kubenswrapper[4804]: E0128 11:55:45.771102 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7\": container with ID starting with d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7 not found: ID does not exist" containerID="d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.771130 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7"} err="failed to get container status \"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7\": rpc error: code = NotFound desc = could not find container \"d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7\": container with ID 
starting with d8a8cf19a3667150e07fc28293c553ac3c16d95a46ac81f695c1dc6c34e1fbc7 not found: ID does not exist" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.771148 4804 scope.go:117] "RemoveContainer" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" Jan 28 11:55:45 crc kubenswrapper[4804]: E0128 11:55:45.771421 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542\": container with ID starting with 09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542 not found: ID does not exist" containerID="09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542" Jan 28 11:55:45 crc kubenswrapper[4804]: I0128 11:55:45.771461 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542"} err="failed to get container status \"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542\": rpc error: code = NotFound desc = could not find container \"09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542\": container with ID starting with 09857497b118a73d24f73f33b83445afdd364f69b6e222d20f2e302807b9e542 not found: ID does not exist" Jan 28 11:55:46 crc kubenswrapper[4804]: I0128 11:55:46.923605 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" path="/var/lib/kubelet/pods/151f894b-da15-43bf-8f8e-44b777c23b68/volumes" Jan 28 11:55:46 crc kubenswrapper[4804]: I0128 11:55:46.977289 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:46 crc kubenswrapper[4804]: I0128 11:55:46.977514 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-dzjjh" 
podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" containerID="cri-o://b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958" gracePeriod=2 Jan 28 11:55:47 crc kubenswrapper[4804]: I0128 11:55:47.710269 4804 generic.go:334] "Generic (PLEG): container finished" podID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerID="b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958" exitCode=0 Jan 28 11:55:47 crc kubenswrapper[4804]: I0128 11:55:47.710308 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958"} Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.808931 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.833124 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") pod \"b808f833-2a0c-4378-96c7-d4b01ce592c1\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.833212 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") pod \"b808f833-2a0c-4378-96c7-d4b01ce592c1\" (UID: \"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.833263 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") pod \"b808f833-2a0c-4378-96c7-d4b01ce592c1\" (UID: 
\"b808f833-2a0c-4378-96c7-d4b01ce592c1\") " Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.834131 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities" (OuterVolumeSpecName: "utilities") pod "b808f833-2a0c-4378-96c7-d4b01ce592c1" (UID: "b808f833-2a0c-4378-96c7-d4b01ce592c1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.838420 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6" (OuterVolumeSpecName: "kube-api-access-ntgh6") pod "b808f833-2a0c-4378-96c7-d4b01ce592c1" (UID: "b808f833-2a0c-4378-96c7-d4b01ce592c1"). InnerVolumeSpecName "kube-api-access-ntgh6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.934179 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ntgh6\" (UniqueName: \"kubernetes.io/projected/b808f833-2a0c-4378-96c7-d4b01ce592c1-kube-api-access-ntgh6\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:48 crc kubenswrapper[4804]: I0128 11:55:48.934210 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.725907 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dzjjh" event={"ID":"b808f833-2a0c-4378-96c7-d4b01ce592c1","Type":"ContainerDied","Data":"8ecd95cae896c03c187e63450a86281bcb483889905dd243efa54281739d745f"} Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.725955 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dzjjh" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.726237 4804 scope.go:117] "RemoveContainer" containerID="b7c483c788932a843294d04d64bb9c669c6e1f37840bec0f8083f1de9b97b958" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.733379 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b808f833-2a0c-4378-96c7-d4b01ce592c1" (UID: "b808f833-2a0c-4378-96c7-d4b01ce592c1"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.743739 4804 scope.go:117] "RemoveContainer" containerID="0189eb18d908e9eae746ff753a2b4759694081f5e06e0ce412145dc609c746c3" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.744426 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b808f833-2a0c-4378-96c7-d4b01ce592c1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:55:49 crc kubenswrapper[4804]: I0128 11:55:49.764336 4804 scope.go:117] "RemoveContainer" containerID="cb95515fba8896aa8f58f7264a95db4df434bb5d342570ccd7c478bd07868bea" Jan 28 11:55:50 crc kubenswrapper[4804]: I0128 11:55:50.056122 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:50 crc kubenswrapper[4804]: I0128 11:55:50.061932 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dzjjh"] Jan 28 11:55:50 crc kubenswrapper[4804]: I0128 11:55:50.924276 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" path="/var/lib/kubelet/pods/b808f833-2a0c-4378-96c7-d4b01ce592c1/volumes" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.581374 4804 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582214 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-utilities" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582228 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-utilities" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582262 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582270 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582279 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582285 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-content" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582301 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-utilities" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582307 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="extract-utilities" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582315 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582321 4804 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: E0128 11:56:09.582329 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582335 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582449 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b808f833-2a0c-4378-96c7-d4b01ce592c1" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.582461 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="151f894b-da15-43bf-8f8e-44b777c23b68" containerName="registry-server" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.583574 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.597552 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.619292 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.619344 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.619444 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.720490 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.720561 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.720587 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.721428 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.721452 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.743402 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"redhat-operators-pbqrx\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:09 crc kubenswrapper[4804]: I0128 11:56:09.901282 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.342699 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.857902 4804 generic.go:334] "Generic (PLEG): container finished" podID="843a2adb-570f-46ac-8c83-791c0891960b" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" exitCode=0 Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.858002 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc"} Jan 28 11:56:10 crc kubenswrapper[4804]: I0128 11:56:10.858205 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerStarted","Data":"272face9840fcdab87bd3e3f81b6bc480565580aadc9dd8b088e8d27d255ed68"} Jan 28 11:56:11 crc kubenswrapper[4804]: I0128 11:56:11.867664 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerStarted","Data":"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989"} Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.582082 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.582180 4804 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.875855 4804 generic.go:334] "Generic (PLEG): container finished" podID="843a2adb-570f-46ac-8c83-791c0891960b" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" exitCode=0 Jan 28 11:56:12 crc kubenswrapper[4804]: I0128 11:56:12.875996 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989"} Jan 28 11:56:13 crc kubenswrapper[4804]: I0128 11:56:13.884787 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerStarted","Data":"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967"} Jan 28 11:56:13 crc kubenswrapper[4804]: I0128 11:56:13.905823 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-pbqrx" podStartSLOduration=2.485765955 podStartE2EDuration="4.90563282s" podCreationTimestamp="2026-01-28 11:56:09 +0000 UTC" firstStartedPulling="2026-01-28 11:56:10.859686023 +0000 UTC m=+2046.654566007" lastFinishedPulling="2026-01-28 11:56:13.279552888 +0000 UTC m=+2049.074432872" observedRunningTime="2026-01-28 11:56:13.90112031 +0000 UTC m=+2049.696000294" watchObservedRunningTime="2026-01-28 11:56:13.90563282 +0000 UTC m=+2049.700512804" Jan 28 11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.902493 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 
11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.903069 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.940784 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:19 crc kubenswrapper[4804]: I0128 11:56:19.992327 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:20 crc kubenswrapper[4804]: I0128 11:56:20.172071 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:21 crc kubenswrapper[4804]: I0128 11:56:21.930179 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-pbqrx" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" containerID="cri-o://af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" gracePeriod=2 Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.287402 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.391456 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") pod \"843a2adb-570f-46ac-8c83-791c0891960b\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.391620 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") pod \"843a2adb-570f-46ac-8c83-791c0891960b\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.391654 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") pod \"843a2adb-570f-46ac-8c83-791c0891960b\" (UID: \"843a2adb-570f-46ac-8c83-791c0891960b\") " Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.392512 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities" (OuterVolumeSpecName: "utilities") pod "843a2adb-570f-46ac-8c83-791c0891960b" (UID: "843a2adb-570f-46ac-8c83-791c0891960b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.396933 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d" (OuterVolumeSpecName: "kube-api-access-5dz4d") pod "843a2adb-570f-46ac-8c83-791c0891960b" (UID: "843a2adb-570f-46ac-8c83-791c0891960b"). InnerVolumeSpecName "kube-api-access-5dz4d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.493299 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5dz4d\" (UniqueName: \"kubernetes.io/projected/843a2adb-570f-46ac-8c83-791c0891960b-kube-api-access-5dz4d\") on node \"crc\" DevicePath \"\"" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.493340 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.938973 4804 generic.go:334] "Generic (PLEG): container finished" podID="843a2adb-570f-46ac-8c83-791c0891960b" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" exitCode=0 Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.939079 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967"} Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.941027 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-pbqrx" event={"ID":"843a2adb-570f-46ac-8c83-791c0891960b","Type":"ContainerDied","Data":"272face9840fcdab87bd3e3f81b6bc480565580aadc9dd8b088e8d27d255ed68"} Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.941063 4804 scope.go:117] "RemoveContainer" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.939126 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-pbqrx" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.957200 4804 scope.go:117] "RemoveContainer" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.973893 4804 scope.go:117] "RemoveContainer" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.994528 4804 scope.go:117] "RemoveContainer" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" Jan 28 11:56:22 crc kubenswrapper[4804]: E0128 11:56:22.994973 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967\": container with ID starting with af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967 not found: ID does not exist" containerID="af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995011 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967"} err="failed to get container status \"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967\": rpc error: code = NotFound desc = could not find container \"af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967\": container with ID starting with af0ff82e2a39670b719e156c7393b6c463af8c5393490a918ee723324578c967 not found: ID does not exist" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995035 4804 scope.go:117] "RemoveContainer" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" Jan 28 11:56:22 crc kubenswrapper[4804]: E0128 11:56:22.995322 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989\": container with ID starting with b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989 not found: ID does not exist" containerID="b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995346 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989"} err="failed to get container status \"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989\": rpc error: code = NotFound desc = could not find container \"b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989\": container with ID starting with b4023f7109d70f78a896a348095aee73746c692fcb89bf2eded9559665e97989 not found: ID does not exist" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995362 4804 scope.go:117] "RemoveContainer" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" Jan 28 11:56:22 crc kubenswrapper[4804]: E0128 11:56:22.995569 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc\": container with ID starting with 95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc not found: ID does not exist" containerID="95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc" Jan 28 11:56:22 crc kubenswrapper[4804]: I0128 11:56:22.995596 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc"} err="failed to get container status \"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc\": rpc error: code = NotFound desc = could not find container 
\"95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc\": container with ID starting with 95a4b2014063da174497225717ec295ea3000f41574d3609bed363632e87fcdc not found: ID does not exist" Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.783691 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "843a2adb-570f-46ac-8c83-791c0891960b" (UID: "843a2adb-570f-46ac-8c83-791c0891960b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.812632 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/843a2adb-570f-46ac-8c83-791c0891960b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.871770 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:23 crc kubenswrapper[4804]: I0128 11:56:23.879158 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-pbqrx"] Jan 28 11:56:24 crc kubenswrapper[4804]: I0128 11:56:24.923995 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="843a2adb-570f-46ac-8c83-791c0891960b" path="/var/lib/kubelet/pods/843a2adb-570f-46ac-8c83-791c0891960b/volumes" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.581814 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.582632 4804 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.582695 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.584810 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:56:42 crc kubenswrapper[4804]: I0128 11:56:42.584938 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903" gracePeriod=600 Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.088958 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903" exitCode=0 Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.089246 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903"} Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.089666 4804 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"} Jan 28 11:56:43 crc kubenswrapper[4804]: I0128 11:56:43.089686 4804 scope.go:117] "RemoveContainer" containerID="b037a840139958e20d6caaca7e1ef48a37d6614d00120e02eec4c96e6eb2e30d" Jan 28 11:58:42 crc kubenswrapper[4804]: I0128 11:58:42.582694 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:58:42 crc kubenswrapper[4804]: I0128 11:58:42.583282 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:59:12 crc kubenswrapper[4804]: I0128 11:59:12.581863 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:59:12 crc kubenswrapper[4804]: I0128 11:59:12.582520 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.582210 4804 
patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583026 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583137 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583805 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 11:59:42 crc kubenswrapper[4804]: I0128 11:59:42.583861 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" gracePeriod=600 Jan 28 11:59:42 crc kubenswrapper[4804]: E0128 11:59:42.733402 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.326319 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" exitCode=0 Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.326380 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"} Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.326437 4804 scope.go:117] "RemoveContainer" containerID="e553f604c379352634978804bed96c120674cc97471ffd0a6e8e24b40cf10903" Jan 28 11:59:43 crc kubenswrapper[4804]: I0128 11:59:43.327233 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 11:59:43 crc kubenswrapper[4804]: E0128 11:59:43.327672 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 11:59:57 crc kubenswrapper[4804]: I0128 11:59:57.915201 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 11:59:57 crc kubenswrapper[4804]: E0128 11:59:57.915919 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146305 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"] Jan 28 12:00:00 crc kubenswrapper[4804]: E0128 12:00:00.146696 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146721 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" Jan 28 12:00:00 crc kubenswrapper[4804]: E0128 12:00:00.146735 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-utilities" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146742 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-utilities" Jan 28 12:00:00 crc kubenswrapper[4804]: E0128 12:00:00.146756 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-content" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146764 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="extract-content" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.146930 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="843a2adb-570f-46ac-8c83-791c0891960b" containerName="registry-server" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.147494 4804 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.153451 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.153601 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"] Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.153657 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.223264 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.223332 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.223382 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.324238 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.324290 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.324332 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.325253 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.329967 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"
Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.341599 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"collect-profiles-29493360-rrbg6\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"
Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.466120 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"
Jan 28 12:00:00 crc kubenswrapper[4804]: I0128 12:00:00.891155 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"]
Jan 28 12:00:01 crc kubenswrapper[4804]: I0128 12:00:01.463743 4804 generic.go:334] "Generic (PLEG): container finished" podID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerID="622c408f48add91adb423999741bd11717dcabf800c16d5c9d66d66f2f7c526d" exitCode=0
Jan 28 12:00:01 crc kubenswrapper[4804]: I0128 12:00:01.465044 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" event={"ID":"aa991fe4-fe41-454b-b0ab-03e5d7a546d7","Type":"ContainerDied","Data":"622c408f48add91adb423999741bd11717dcabf800c16d5c9d66d66f2f7c526d"}
Jan 28 12:00:01 crc kubenswrapper[4804]: I0128 12:00:01.465146 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" event={"ID":"aa991fe4-fe41-454b-b0ab-03e5d7a546d7","Type":"ContainerStarted","Data":"73ff2a24e8df59049dadc8e7977d7a6a20756ce86df0dca087d540534a76bb66"}
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.751654 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.874011 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") pod \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") "
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.874499 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") pod \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") "
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.874599 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") pod \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\" (UID: \"aa991fe4-fe41-454b-b0ab-03e5d7a546d7\") "
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.875731 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume" (OuterVolumeSpecName: "config-volume") pod "aa991fe4-fe41-454b-b0ab-03e5d7a546d7" (UID: "aa991fe4-fe41-454b-b0ab-03e5d7a546d7"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.882929 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "aa991fe4-fe41-454b-b0ab-03e5d7a546d7" (UID: "aa991fe4-fe41-454b-b0ab-03e5d7a546d7"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.884496 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn" (OuterVolumeSpecName: "kube-api-access-l45sn") pod "aa991fe4-fe41-454b-b0ab-03e5d7a546d7" (UID: "aa991fe4-fe41-454b-b0ab-03e5d7a546d7"). InnerVolumeSpecName "kube-api-access-l45sn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.979750 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.979793 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l45sn\" (UniqueName: \"kubernetes.io/projected/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-kube-api-access-l45sn\") on node \"crc\" DevicePath \"\""
Jan 28 12:00:02 crc kubenswrapper[4804]: I0128 12:00:02.979807 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/aa991fe4-fe41-454b-b0ab-03e5d7a546d7-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.479129 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6" event={"ID":"aa991fe4-fe41-454b-b0ab-03e5d7a546d7","Type":"ContainerDied","Data":"73ff2a24e8df59049dadc8e7977d7a6a20756ce86df0dca087d540534a76bb66"}
Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.479462 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="73ff2a24e8df59049dadc8e7977d7a6a20756ce86df0dca087d540534a76bb66"
Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.479192 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493360-rrbg6"
Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.819419 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"]
Jan 28 12:00:03 crc kubenswrapper[4804]: I0128 12:00:03.842847 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493315-jjvdr"]
Jan 28 12:00:04 crc kubenswrapper[4804]: I0128 12:00:04.923760 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae7433f6-40cb-4caf-8356-10bb93645af5" path="/var/lib/kubelet/pods/ae7433f6-40cb-4caf-8356-10bb93645af5/volumes"
Jan 28 12:00:11 crc kubenswrapper[4804]: I0128 12:00:11.915671 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:00:11 crc kubenswrapper[4804]: E0128 12:00:11.916453 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:00:21 crc kubenswrapper[4804]: I0128 12:00:21.408425 4804 scope.go:117] "RemoveContainer" containerID="cc0257ab63b8ce14bac812eeb4ebcfe9baa7187c37d0e2df6e719355693b5895"
Jan 28 12:00:25 crc kubenswrapper[4804]: I0128 12:00:25.915426 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:00:25 crc kubenswrapper[4804]: E0128 12:00:25.916400 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:00:39 crc kubenswrapper[4804]: I0128 12:00:39.915189 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:00:39 crc kubenswrapper[4804]: E0128 12:00:39.915903 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:00:52 crc kubenswrapper[4804]: I0128 12:00:52.915058 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:00:52 crc kubenswrapper[4804]: E0128 12:00:52.915760 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:01:06 crc kubenswrapper[4804]: I0128 12:01:06.915677 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:01:06 crc kubenswrapper[4804]: E0128 12:01:06.916510 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:01:20 crc kubenswrapper[4804]: I0128 12:01:20.914937 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:01:20 crc kubenswrapper[4804]: E0128 12:01:20.915661 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:01:32 crc kubenswrapper[4804]: I0128 12:01:32.915580 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:01:32 crc kubenswrapper[4804]: E0128 12:01:32.916550 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:01:45 crc kubenswrapper[4804]: I0128 12:01:45.915601 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:01:45 crc kubenswrapper[4804]: E0128 12:01:45.916340 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:01:58 crc kubenswrapper[4804]: I0128 12:01:58.914761 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:01:58 crc kubenswrapper[4804]: E0128 12:01:58.915250 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:02:13 crc kubenswrapper[4804]: I0128 12:02:13.915504 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:02:13 crc kubenswrapper[4804]: E0128 12:02:13.917962 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:02:25 crc kubenswrapper[4804]: I0128 12:02:25.914815 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:02:25 crc kubenswrapper[4804]: E0128 12:02:25.915446 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:02:39 crc kubenswrapper[4804]: I0128 12:02:39.915467 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:02:39 crc kubenswrapper[4804]: E0128 12:02:39.916193 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:02:50 crc kubenswrapper[4804]: I0128 12:02:50.914907 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:02:50 crc kubenswrapper[4804]: E0128 12:02:50.916530 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:03:04 crc kubenswrapper[4804]: I0128 12:03:04.919964 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:03:04 crc kubenswrapper[4804]: E0128 12:03:04.920851 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:03:17 crc kubenswrapper[4804]: I0128 12:03:17.914982 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:03:17 crc kubenswrapper[4804]: E0128 12:03:17.915725 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:03:30 crc kubenswrapper[4804]: I0128 12:03:30.915557 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:03:30 crc kubenswrapper[4804]: E0128 12:03:30.916732 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:03:43 crc kubenswrapper[4804]: I0128 12:03:43.915751 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:03:43 crc kubenswrapper[4804]: E0128 12:03:43.916806 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:03:54 crc kubenswrapper[4804]: I0128 12:03:54.928228 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:03:54 crc kubenswrapper[4804]: E0128 12:03:54.931678 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:04:07 crc kubenswrapper[4804]: I0128 12:04:07.915095 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:04:07 crc kubenswrapper[4804]: E0128 12:04:07.916489 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:04:22 crc kubenswrapper[4804]: I0128 12:04:22.914709 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:04:22 crc kubenswrapper[4804]: E0128 12:04:22.915741 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:04:35 crc kubenswrapper[4804]: I0128 12:04:35.915665 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:04:35 crc kubenswrapper[4804]: E0128 12:04:35.916314 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:04:47 crc kubenswrapper[4804]: I0128 12:04:47.914752 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2"
Jan 28 12:04:48 crc kubenswrapper[4804]: I0128 12:04:48.348444 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e"}
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.652190 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5wsqw"]
Jan 28 12:06:28 crc kubenswrapper[4804]: E0128 12:06:28.653431 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerName="collect-profiles"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.653447 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerName="collect-profiles"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.653653 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="aa991fe4-fe41-454b-b0ab-03e5d7a546d7" containerName="collect-profiles"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.654843 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.675615 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"]
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.791017 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.791130 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.791160 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.893754 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.893891 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.893924 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.894496 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.894754 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.915839 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"community-operators-5wsqw\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") " pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:28 crc kubenswrapper[4804]: I0128 12:06:28.976190 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:29 crc kubenswrapper[4804]: I0128 12:06:29.545023 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"]
Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.056527 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b" exitCode=0
Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.056642 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b"}
Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.056870 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerStarted","Data":"5407d7ab986b7fae2d51fbd84ff48673cbe5fbf1ed74c93e833605b4bda3b44a"}
Jan 28 12:06:30 crc kubenswrapper[4804]: I0128 12:06:30.058519 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 12:06:32 crc kubenswrapper[4804]: I0128 12:06:32.075523 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340" exitCode=0
Jan 28 12:06:32 crc kubenswrapper[4804]: I0128 12:06:32.075607 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340"}
Jan 28 12:06:33 crc kubenswrapper[4804]: I0128 12:06:33.083577 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerStarted","Data":"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"}
Jan 28 12:06:33 crc kubenswrapper[4804]: I0128 12:06:33.104708 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5wsqw" podStartSLOduration=2.65521039 podStartE2EDuration="5.10468501s" podCreationTimestamp="2026-01-28 12:06:28 +0000 UTC" firstStartedPulling="2026-01-28 12:06:30.058276551 +0000 UTC m=+2665.853156535" lastFinishedPulling="2026-01-28 12:06:32.507751171 +0000 UTC m=+2668.302631155" observedRunningTime="2026-01-28 12:06:33.101554563 +0000 UTC m=+2668.896434547" watchObservedRunningTime="2026-01-28 12:06:33.10468501 +0000 UTC m=+2668.899564994"
Jan 28 12:06:38 crc kubenswrapper[4804]: I0128 12:06:38.977089 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:38 crc kubenswrapper[4804]: I0128 12:06:38.977694 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:39 crc kubenswrapper[4804]: I0128 12:06:39.027415 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:39 crc kubenswrapper[4804]: I0128 12:06:39.166485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:39 crc kubenswrapper[4804]: I0128 12:06:39.254258 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"]
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.139199 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-5wsqw" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" containerID="cri-o://b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04" gracePeriod=2
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.532637 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.714288 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") pod \"f3b04bfb-68de-453f-b4d7-5000680da6ea\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") "
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.714346 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") pod \"f3b04bfb-68de-453f-b4d7-5000680da6ea\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") "
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.714430 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") pod \"f3b04bfb-68de-453f-b4d7-5000680da6ea\" (UID: \"f3b04bfb-68de-453f-b4d7-5000680da6ea\") "
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.715276 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities" (OuterVolumeSpecName: "utilities") pod "f3b04bfb-68de-453f-b4d7-5000680da6ea" (UID: "f3b04bfb-68de-453f-b4d7-5000680da6ea"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.723072 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg" (OuterVolumeSpecName: "kube-api-access-5gddg") pod "f3b04bfb-68de-453f-b4d7-5000680da6ea" (UID: "f3b04bfb-68de-453f-b4d7-5000680da6ea"). InnerVolumeSpecName "kube-api-access-5gddg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.815500 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5gddg\" (UniqueName: \"kubernetes.io/projected/f3b04bfb-68de-453f-b4d7-5000680da6ea-kube-api-access-5gddg\") on node \"crc\" DevicePath \"\""
Jan 28 12:06:41 crc kubenswrapper[4804]: I0128 12:06:41.815538 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147205 4804 generic.go:334] "Generic (PLEG): container finished" podID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04" exitCode=0
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147248 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"}
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147276 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5wsqw" event={"ID":"f3b04bfb-68de-453f-b4d7-5000680da6ea","Type":"ContainerDied","Data":"5407d7ab986b7fae2d51fbd84ff48673cbe5fbf1ed74c93e833605b4bda3b44a"}
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147292 4804 scope.go:117] "RemoveContainer" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.147409 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5wsqw"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.169558 4804 scope.go:117] "RemoveContainer" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.176460 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f3b04bfb-68de-453f-b4d7-5000680da6ea" (UID: "f3b04bfb-68de-453f-b4d7-5000680da6ea"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.190772 4804 scope.go:117] "RemoveContainer" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.210623 4804 scope.go:117] "RemoveContainer" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"
Jan 28 12:06:42 crc kubenswrapper[4804]: E0128 12:06:42.211188 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04\": container with ID starting with b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04 not found: ID does not exist" containerID="b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211238 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04"} err="failed to get container status \"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04\": rpc error: code = NotFound desc = could not find container \"b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04\": container with ID starting with b9a37e7d8d779d11de789ab25960ddd4c77aa8d19252060347e0ec293af5bd04 not found: ID does not exist"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211272 4804 scope.go:117] "RemoveContainer" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340"
Jan 28 12:06:42 crc kubenswrapper[4804]: E0128 12:06:42.211670 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340\": container with ID starting with 4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340 not found: ID does not exist" containerID="4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211704 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340"} err="failed to get container status \"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340\": rpc error: code = NotFound desc = could not find container \"4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340\": container with ID starting with 4b9884500b084f5cc0470685f366c5a4bd37f56fcc25d9a3688acabed5002340 not found: ID does not exist"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.211730 4804 scope.go:117] "RemoveContainer" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b"
Jan 28 12:06:42 crc kubenswrapper[4804]: E0128 12:06:42.212099 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b\": container with ID starting with 9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b not found: ID does not exist" containerID="9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.212131 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b"} err="failed to get container status \"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b\": rpc error: code = NotFound desc = could not find container \"9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b\": container with ID starting with 9042fc716439510c3153f2bb0f294a733acce1f35063251d32c632ee7be9ca8b not found: ID does not exist"
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.220281 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f3b04bfb-68de-453f-b4d7-5000680da6ea-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.481973 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"]
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.487906 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-5wsqw"]
Jan 28 12:06:42 crc kubenswrapper[4804]: I0128 12:06:42.925960 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" path="/var/lib/kubelet/pods/f3b04bfb-68de-453f-b4d7-5000680da6ea/volumes"
Jan 28 12:07:12 crc kubenswrapper[4804]: I0128 12:07:12.582027 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:07:12 crc kubenswrapper[4804]: I0128 12:07:12.582637 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:07:42 crc kubenswrapper[4804]: I0128 12:07:42.582757 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:07:42 crc kubenswrapper[4804]: I0128 12:07:42.584412 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.234533 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:07:51 crc kubenswrapper[4804]: E0128 12:07:51.236230 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-content" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.236260 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-content" Jan 28 12:07:51 crc kubenswrapper[4804]: E0128 12:07:51.236270 4804 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.236277 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" Jan 28 12:07:51 crc kubenswrapper[4804]: E0128 12:07:51.236291 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-utilities" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.236297 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="extract-utilities" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.239089 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3b04bfb-68de-453f-b4d7-5000680da6ea" containerName="registry-server" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.240387 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.240486 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.321998 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.322405 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.322461 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.423822 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.423898 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod 
\"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.423981 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.424324 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.424475 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.444655 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"certified-operators-pcp4d\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:51 crc kubenswrapper[4804]: I0128 12:07:51.561423 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.081781 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.655760 4804 generic.go:334] "Generic (PLEG): container finished" podID="182a1540-9bf9-4275-bed6-695b4543de27" containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" exitCode=0 Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.655849 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c"} Jan 28 12:07:52 crc kubenswrapper[4804]: I0128 12:07:52.655922 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerStarted","Data":"32cc816914e97792913704f608113a3fb2772fa49fbca79479c33e5a8e0aba84"} Jan 28 12:07:54 crc kubenswrapper[4804]: I0128 12:07:54.671431 4804 generic.go:334] "Generic (PLEG): container finished" podID="182a1540-9bf9-4275-bed6-695b4543de27" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" exitCode=0 Jan 28 12:07:54 crc kubenswrapper[4804]: I0128 12:07:54.671503 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907"} Jan 28 12:07:55 crc kubenswrapper[4804]: I0128 12:07:55.681562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" 
event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerStarted","Data":"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7"} Jan 28 12:07:55 crc kubenswrapper[4804]: I0128 12:07:55.702003 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pcp4d" podStartSLOduration=2.213596177 podStartE2EDuration="4.701984334s" podCreationTimestamp="2026-01-28 12:07:51 +0000 UTC" firstStartedPulling="2026-01-28 12:07:52.662544933 +0000 UTC m=+2748.457424917" lastFinishedPulling="2026-01-28 12:07:55.15093309 +0000 UTC m=+2750.945813074" observedRunningTime="2026-01-28 12:07:55.696906227 +0000 UTC m=+2751.491786211" watchObservedRunningTime="2026-01-28 12:07:55.701984334 +0000 UTC m=+2751.496864318" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.562014 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.562647 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.619460 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.751756 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:01 crc kubenswrapper[4804]: I0128 12:08:01.851249 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:08:03 crc kubenswrapper[4804]: I0128 12:08:03.728465 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pcp4d" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" 
containerID="cri-o://83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" gracePeriod=2 Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.118452 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.213630 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") pod \"182a1540-9bf9-4275-bed6-695b4543de27\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.213737 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") pod \"182a1540-9bf9-4275-bed6-695b4543de27\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.213849 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") pod \"182a1540-9bf9-4275-bed6-695b4543de27\" (UID: \"182a1540-9bf9-4275-bed6-695b4543de27\") " Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.214737 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities" (OuterVolumeSpecName: "utilities") pod "182a1540-9bf9-4275-bed6-695b4543de27" (UID: "182a1540-9bf9-4275-bed6-695b4543de27"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.219158 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs" (OuterVolumeSpecName: "kube-api-access-t64fs") pod "182a1540-9bf9-4275-bed6-695b4543de27" (UID: "182a1540-9bf9-4275-bed6-695b4543de27"). InnerVolumeSpecName "kube-api-access-t64fs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.269839 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "182a1540-9bf9-4275-bed6-695b4543de27" (UID: "182a1540-9bf9-4275-bed6-695b4543de27"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.314828 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.315203 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/182a1540-9bf9-4275-bed6-695b4543de27-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.315219 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t64fs\" (UniqueName: \"kubernetes.io/projected/182a1540-9bf9-4275-bed6-695b4543de27-kube-api-access-t64fs\") on node \"crc\" DevicePath \"\"" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737254 4804 generic.go:334] "Generic (PLEG): container finished" podID="182a1540-9bf9-4275-bed6-695b4543de27" 
containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" exitCode=0 Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737300 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7"} Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737359 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pcp4d" event={"ID":"182a1540-9bf9-4275-bed6-695b4543de27","Type":"ContainerDied","Data":"32cc816914e97792913704f608113a3fb2772fa49fbca79479c33e5a8e0aba84"} Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737365 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pcp4d" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.737378 4804 scope.go:117] "RemoveContainer" containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.756739 4804 scope.go:117] "RemoveContainer" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.771218 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.776538 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pcp4d"] Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.804072 4804 scope.go:117] "RemoveContainer" containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.821743 4804 scope.go:117] "RemoveContainer" containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" Jan 28 
12:08:04 crc kubenswrapper[4804]: E0128 12:08:04.822466 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7\": container with ID starting with 83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7 not found: ID does not exist" containerID="83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.822521 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7"} err="failed to get container status \"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7\": rpc error: code = NotFound desc = could not find container \"83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7\": container with ID starting with 83e3c39b81527e7891440a2333430bcbd8bd7b1a5a1e53565100a947e26371f7 not found: ID does not exist" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.822555 4804 scope.go:117] "RemoveContainer" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" Jan 28 12:08:04 crc kubenswrapper[4804]: E0128 12:08:04.823016 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907\": container with ID starting with 0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907 not found: ID does not exist" containerID="0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.823051 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907"} err="failed to get container status 
\"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907\": rpc error: code = NotFound desc = could not find container \"0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907\": container with ID starting with 0ab57f34127d6e5ddf202553f4aa8e0c08737fc632afd9a84252f664c8415907 not found: ID does not exist" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.823073 4804 scope.go:117] "RemoveContainer" containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" Jan 28 12:08:04 crc kubenswrapper[4804]: E0128 12:08:04.823372 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c\": container with ID starting with a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c not found: ID does not exist" containerID="a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.823416 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c"} err="failed to get container status \"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c\": rpc error: code = NotFound desc = could not find container \"a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c\": container with ID starting with a71b6f9b51dffbcffad9314284b06d82e287562aac7417dd8febf0f776f8980c not found: ID does not exist" Jan 28 12:08:04 crc kubenswrapper[4804]: I0128 12:08:04.925256 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182a1540-9bf9-4275-bed6-695b4543de27" path="/var/lib/kubelet/pods/182a1540-9bf9-4275-bed6-695b4543de27/volumes" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.582692 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583248 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583296 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583932 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.583989 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e" gracePeriod=600 Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.784856 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e" exitCode=0 Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.784916 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e"} Jan 28 12:08:12 crc kubenswrapper[4804]: I0128 12:08:12.784997 4804 scope.go:117] "RemoveContainer" containerID="aaa52319e37315d536ef86475e1561a6d0416fadd59474a09b23007b84d33db2" Jan 28 12:08:13 crc kubenswrapper[4804]: I0128 12:08:13.793078 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"} Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.737223 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:23 crc kubenswrapper[4804]: E0128 12:09:23.738080 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-content" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738091 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-content" Jan 28 12:09:23 crc kubenswrapper[4804]: E0128 12:09:23.738109 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-utilities" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738115 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="extract-utilities" Jan 28 12:09:23 crc kubenswrapper[4804]: E0128 12:09:23.738125 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738131 4804 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.738260 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="182a1540-9bf9-4275-bed6-695b4543de27" containerName="registry-server" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.739278 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.754275 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.842834 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.843210 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.843255 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.942679 4804 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.944578 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.944640 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.944679 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.945068 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.946729 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.946939 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.957911 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:23 crc kubenswrapper[4804]: I0128 12:09:23.970014 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"redhat-marketplace-l5fsc\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.046398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.046477 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.046500 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.059221 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.149710 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.149831 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.149864 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " 
pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.150448 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.150497 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.173722 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"redhat-operators-d7nv2\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.257344 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.505079 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:24 crc kubenswrapper[4804]: I0128 12:09:24.518524 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:24 crc kubenswrapper[4804]: W0128 12:09:24.530966 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86faab76_d908_4b49_85bf_e5209af19052.slice/crio-b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc WatchSource:0}: Error finding container b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc: Status 404 returned error can't find the container with id b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.259061 4804 generic.go:334] "Generic (PLEG): container finished" podID="86faab76-d908-4b49-85bf-e5209af19052" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" exitCode=0 Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.259216 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462"} Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.259678 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerStarted","Data":"b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc"} Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.264995 4804 generic.go:334] "Generic (PLEG): container finished" 
podID="38e47861-4801-4654-b1df-0638f0e86369" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" exitCode=0 Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.265057 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372"} Jan 28 12:09:25 crc kubenswrapper[4804]: I0128 12:09:25.265090 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerStarted","Data":"ae90a840031eaca3f34697b26474729a2d6ca70016fb2e230f6362627e5d39d2"} Jan 28 12:09:26 crc kubenswrapper[4804]: I0128 12:09:26.274057 4804 generic.go:334] "Generic (PLEG): container finished" podID="86faab76-d908-4b49-85bf-e5209af19052" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" exitCode=0 Jan 28 12:09:26 crc kubenswrapper[4804]: I0128 12:09:26.274169 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b"} Jan 28 12:09:26 crc kubenswrapper[4804]: I0128 12:09:26.281570 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerStarted","Data":"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de"} Jan 28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.293695 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerStarted","Data":"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1"} Jan 
28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.295954 4804 generic.go:334] "Generic (PLEG): container finished" podID="38e47861-4801-4654-b1df-0638f0e86369" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" exitCode=0 Jan 28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.295992 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de"} Jan 28 12:09:27 crc kubenswrapper[4804]: I0128 12:09:27.327806 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-l5fsc" podStartSLOduration=2.780763035 podStartE2EDuration="4.327765558s" podCreationTimestamp="2026-01-28 12:09:23 +0000 UTC" firstStartedPulling="2026-01-28 12:09:25.265596863 +0000 UTC m=+2841.060476847" lastFinishedPulling="2026-01-28 12:09:26.812599386 +0000 UTC m=+2842.607479370" observedRunningTime="2026-01-28 12:09:27.316796678 +0000 UTC m=+2843.111676662" watchObservedRunningTime="2026-01-28 12:09:27.327765558 +0000 UTC m=+2843.122645572" Jan 28 12:09:28 crc kubenswrapper[4804]: I0128 12:09:28.307372 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerStarted","Data":"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506"} Jan 28 12:09:28 crc kubenswrapper[4804]: I0128 12:09:28.328971 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d7nv2" podStartSLOduration=2.908322672 podStartE2EDuration="5.328937577s" podCreationTimestamp="2026-01-28 12:09:23 +0000 UTC" firstStartedPulling="2026-01-28 12:09:25.267830762 +0000 UTC m=+2841.062710746" lastFinishedPulling="2026-01-28 12:09:27.688445627 +0000 UTC m=+2843.483325651" 
observedRunningTime="2026-01-28 12:09:28.327048708 +0000 UTC m=+2844.121928712" watchObservedRunningTime="2026-01-28 12:09:28.328937577 +0000 UTC m=+2844.123817571" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.059778 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.060664 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.118461 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.258112 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.258554 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.299131 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.386847 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.399654 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:34 crc kubenswrapper[4804]: I0128 12:09:34.926656 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:35 crc kubenswrapper[4804]: I0128 12:09:35.125221 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-operators-d7nv2"] Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.358763 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-l5fsc" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server" containerID="cri-o://8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" gracePeriod=2 Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.358871 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d7nv2" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server" containerID="cri-o://b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" gracePeriod=2 Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.885734 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:36 crc kubenswrapper[4804]: I0128 12:09:36.892995 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035545 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") pod \"86faab76-d908-4b49-85bf-e5209af19052\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035612 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") pod \"38e47861-4801-4654-b1df-0638f0e86369\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035631 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") pod \"38e47861-4801-4654-b1df-0638f0e86369\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035753 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") pod \"86faab76-d908-4b49-85bf-e5209af19052\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035779 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") pod \"86faab76-d908-4b49-85bf-e5209af19052\" (UID: \"86faab76-d908-4b49-85bf-e5209af19052\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.035801 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") pod \"38e47861-4801-4654-b1df-0638f0e86369\" (UID: \"38e47861-4801-4654-b1df-0638f0e86369\") " Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.036486 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities" (OuterVolumeSpecName: "utilities") pod "86faab76-d908-4b49-85bf-e5209af19052" (UID: "86faab76-d908-4b49-85bf-e5209af19052"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.036536 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities" (OuterVolumeSpecName: "utilities") pod "38e47861-4801-4654-b1df-0638f0e86369" (UID: "38e47861-4801-4654-b1df-0638f0e86369"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.037614 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.037712 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.041028 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh" (OuterVolumeSpecName: "kube-api-access-kdkzh") pod "38e47861-4801-4654-b1df-0638f0e86369" (UID: "38e47861-4801-4654-b1df-0638f0e86369"). InnerVolumeSpecName "kube-api-access-kdkzh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.041119 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr" (OuterVolumeSpecName: "kube-api-access-748rr") pod "86faab76-d908-4b49-85bf-e5209af19052" (UID: "86faab76-d908-4b49-85bf-e5209af19052"). InnerVolumeSpecName "kube-api-access-748rr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.139464 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kdkzh\" (UniqueName: \"kubernetes.io/projected/38e47861-4801-4654-b1df-0638f0e86369-kube-api-access-kdkzh\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.139503 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-748rr\" (UniqueName: \"kubernetes.io/projected/86faab76-d908-4b49-85bf-e5209af19052-kube-api-access-748rr\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.367534 4804 generic.go:334] "Generic (PLEG): container finished" podID="86faab76-d908-4b49-85bf-e5209af19052" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" exitCode=0 Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.367572 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1"} Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.367617 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-l5fsc" event={"ID":"86faab76-d908-4b49-85bf-e5209af19052","Type":"ContainerDied","Data":"b3d630c7f4e2fb1ffe1922156c431b80caea5182c143c51692b83916473b16fc"} Jan 28 12:09:37 crc 
kubenswrapper[4804]: I0128 12:09:37.367649 4804 scope.go:117] "RemoveContainer" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.369365 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-l5fsc" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372039 4804 generic.go:334] "Generic (PLEG): container finished" podID="38e47861-4801-4654-b1df-0638f0e86369" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" exitCode=0 Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372162 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506"} Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372344 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d7nv2" event={"ID":"38e47861-4801-4654-b1df-0638f0e86369","Type":"ContainerDied","Data":"ae90a840031eaca3f34697b26474729a2d6ca70016fb2e230f6362627e5d39d2"} Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.372213 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d7nv2" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.385815 4804 scope.go:117] "RemoveContainer" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.401011 4804 scope.go:117] "RemoveContainer" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.414797 4804 scope.go:117] "RemoveContainer" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.415123 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1\": container with ID starting with 8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1 not found: ID does not exist" containerID="8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415164 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1"} err="failed to get container status \"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1\": rpc error: code = NotFound desc = could not find container \"8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1\": container with ID starting with 8378ebc15a4fba11bf3c0e54d8ba8b56c543e13479617b97b536abf38c0dc3a1 not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415190 4804 scope.go:117] "RemoveContainer" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.415565 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b\": container with ID starting with de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b not found: ID does not exist" containerID="de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415596 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b"} err="failed to get container status \"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b\": rpc error: code = NotFound desc = could not find container \"de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b\": container with ID starting with de97b43b896c40ad5c89d27c79e57b6bfecb5b549e032f511a3f290fab9fda5b not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415611 4804 scope.go:117] "RemoveContainer" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.415911 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462\": container with ID starting with cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462 not found: ID does not exist" containerID="cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415938 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462"} err="failed to get container status \"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462\": rpc error: code = NotFound desc = could not find container 
\"cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462\": container with ID starting with cc92a11805ceb4911c903b54b64b75d8d58fb3becb06d83510781ad24f3ec462 not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.415953 4804 scope.go:117] "RemoveContainer" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.428178 4804 scope.go:117] "RemoveContainer" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.442470 4804 scope.go:117] "RemoveContainer" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456032 4804 scope.go:117] "RemoveContainer" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.456401 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506\": container with ID starting with b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506 not found: ID does not exist" containerID="b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456433 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506"} err="failed to get container status \"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506\": rpc error: code = NotFound desc = could not find container \"b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506\": container with ID starting with b05eef4c3115d56c41df3967c4626ea7fa3851d401f8df8b919ccf6dbe280506 not found: ID does not exist" Jan 28 12:09:37 crc 
kubenswrapper[4804]: I0128 12:09:37.456458 4804 scope.go:117] "RemoveContainer" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.456821 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de\": container with ID starting with c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de not found: ID does not exist" containerID="c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456839 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de"} err="failed to get container status \"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de\": rpc error: code = NotFound desc = could not find container \"c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de\": container with ID starting with c3e5c53a61b3aed9eb07fce4d3ed14a6dede244e437c3c872dffd197868444de not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.456854 4804 scope.go:117] "RemoveContainer" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" Jan 28 12:09:37 crc kubenswrapper[4804]: E0128 12:09:37.457231 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372\": container with ID starting with e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372 not found: ID does not exist" containerID="e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.457254 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372"} err="failed to get container status \"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372\": rpc error: code = NotFound desc = could not find container \"e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372\": container with ID starting with e79ab104737ea2ea82ecdea5e53571ebdd8ca55439eb8a0d08df7c54d8ca6372 not found: ID does not exist" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.944802 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86faab76-d908-4b49-85bf-e5209af19052" (UID: "86faab76-d908-4b49-85bf-e5209af19052"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:09:37 crc kubenswrapper[4804]: I0128 12:09:37.949674 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86faab76-d908-4b49-85bf-e5209af19052-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.007626 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.014722 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-l5fsc"] Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.495145 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "38e47861-4801-4654-b1df-0638f0e86369" (UID: "38e47861-4801-4654-b1df-0638f0e86369"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.560004 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/38e47861-4801-4654-b1df-0638f0e86369-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.610139 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"]
Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.616567 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d7nv2"]
Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.942217 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38e47861-4801-4654-b1df-0638f0e86369" path="/var/lib/kubelet/pods/38e47861-4801-4654-b1df-0638f0e86369/volumes"
Jan 28 12:09:38 crc kubenswrapper[4804]: I0128 12:09:38.942940 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86faab76-d908-4b49-85bf-e5209af19052" path="/var/lib/kubelet/pods/86faab76-d908-4b49-85bf-e5209af19052/volumes"
Jan 28 12:10:12 crc kubenswrapper[4804]: I0128 12:10:12.581676 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:10:12 crc kubenswrapper[4804]: I0128 12:10:12.582223 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:10:42 crc kubenswrapper[4804]: I0128 12:10:42.582091 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:10:42 crc kubenswrapper[4804]: I0128 12:10:42.582703 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.583085 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.583643 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.584113 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.584824 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 12:11:12 crc kubenswrapper[4804]: I0128 12:11:12.584877 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" gracePeriod=600
Jan 28 12:11:12 crc kubenswrapper[4804]: E0128 12:11:12.716909 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069234 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" exitCode=0
Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069296 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"}
Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069330 4804 scope.go:117] "RemoveContainer" containerID="566a846233c878f366e78fe6b0eb0d66e44ee38fa76277e0cb50bdcb6cf30c2e"
Jan 28 12:11:13 crc kubenswrapper[4804]: I0128 12:11:13.069706 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:11:13 crc kubenswrapper[4804]: E0128 12:11:13.069937 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:11:27 crc kubenswrapper[4804]: I0128 12:11:27.914969 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:11:27 crc kubenswrapper[4804]: E0128 12:11:27.915821 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:11:42 crc kubenswrapper[4804]: I0128 12:11:42.914839 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:11:42 crc kubenswrapper[4804]: E0128 12:11:42.916536 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:11:54 crc kubenswrapper[4804]: I0128 12:11:54.918399 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:11:54 crc kubenswrapper[4804]: E0128 12:11:54.920710 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:12:08 crc kubenswrapper[4804]: I0128 12:12:08.914924 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:12:08 crc kubenswrapper[4804]: E0128 12:12:08.915702 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:12:19 crc kubenswrapper[4804]: I0128 12:12:19.915141 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:12:19 crc kubenswrapper[4804]: E0128 12:12:19.915949 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:12:32 crc kubenswrapper[4804]: I0128 12:12:32.915712 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:12:32 crc kubenswrapper[4804]: E0128 12:12:32.916920 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:12:47 crc kubenswrapper[4804]: I0128 12:12:47.914817 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:12:47 crc kubenswrapper[4804]: E0128 12:12:47.915540 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:12:58 crc kubenswrapper[4804]: I0128 12:12:58.914845 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:12:58 crc kubenswrapper[4804]: E0128 12:12:58.915738 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:13:12 crc kubenswrapper[4804]: I0128 12:13:12.915158 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:13:12 crc kubenswrapper[4804]: E0128 12:13:12.916327 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:13:26 crc kubenswrapper[4804]: I0128 12:13:26.915324 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:13:26 crc kubenswrapper[4804]: E0128 12:13:26.916634 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:13:38 crc kubenswrapper[4804]: I0128 12:13:38.915825 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:13:38 crc kubenswrapper[4804]: E0128 12:13:38.916749 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:13:49 crc kubenswrapper[4804]: I0128 12:13:49.914727 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:13:49 crc kubenswrapper[4804]: E0128 12:13:49.915516 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:14:00 crc kubenswrapper[4804]: I0128 12:14:00.914843 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:14:00 crc kubenswrapper[4804]: E0128 12:14:00.916305 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:14:12 crc kubenswrapper[4804]: I0128 12:14:12.915724 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:14:12 crc kubenswrapper[4804]: E0128 12:14:12.916508 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:14:26 crc kubenswrapper[4804]: I0128 12:14:26.915068 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:14:26 crc kubenswrapper[4804]: E0128 12:14:26.915689 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:14:37 crc kubenswrapper[4804]: I0128 12:14:37.914949 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:14:37 crc kubenswrapper[4804]: E0128 12:14:37.915670 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:14:52 crc kubenswrapper[4804]: I0128 12:14:52.915724 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:14:52 crc kubenswrapper[4804]: E0128 12:14:52.916300 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.143087 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"]
Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146012 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-utilities"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146034 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-utilities"
Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146052 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-content"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146061 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="extract-content"
Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146072 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-utilities"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146080 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-utilities"
Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146091 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146099 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server"
Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146113 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146120 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server"
Jan 28 12:15:00 crc kubenswrapper[4804]: E0128 12:15:00.146134 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-content"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146144 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="extract-content"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146305 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="38e47861-4801-4654-b1df-0638f0e86369" containerName="registry-server"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.146331 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="86faab76-d908-4b49-85bf-e5209af19052" containerName="registry-server"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.149027 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.150806 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"]
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.150993 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.151077 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.341461 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.341792 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.341813 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.442672 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.442972 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.443091 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.443997 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.449765 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.459447 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"collect-profiles-29493375-jzcz6\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.472129 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:00 crc kubenswrapper[4804]: I0128 12:15:00.864306 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"]
Jan 28 12:15:01 crc kubenswrapper[4804]: I0128 12:15:01.778334 4804 generic.go:334] "Generic (PLEG): container finished" podID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerID="c55fea870228d7c60c4dcade51769d821b6a45662de1315a0385d6343440a705" exitCode=0
Jan 28 12:15:01 crc kubenswrapper[4804]: I0128 12:15:01.778385 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" event={"ID":"cd5d65a2-f669-4c73-a215-c2cc62d5642f","Type":"ContainerDied","Data":"c55fea870228d7c60c4dcade51769d821b6a45662de1315a0385d6343440a705"}
Jan 28 12:15:01 crc kubenswrapper[4804]: I0128 12:15:01.778415 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" event={"ID":"cd5d65a2-f669-4c73-a215-c2cc62d5642f","Type":"ContainerStarted","Data":"1bcbd689d8193d64f53f494587e7dfb627add9696efda821d34abe4a7d007353"}
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.034486 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.084107 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") pod \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") "
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.084230 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") pod \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") "
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.084270 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") pod \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\" (UID: \"cd5d65a2-f669-4c73-a215-c2cc62d5642f\") "
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.085543 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume" (OuterVolumeSpecName: "config-volume") pod "cd5d65a2-f669-4c73-a215-c2cc62d5642f" (UID: "cd5d65a2-f669-4c73-a215-c2cc62d5642f"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.089647 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "cd5d65a2-f669-4c73-a215-c2cc62d5642f" (UID: "cd5d65a2-f669-4c73-a215-c2cc62d5642f"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.089678 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv" (OuterVolumeSpecName: "kube-api-access-kj9fv") pod "cd5d65a2-f669-4c73-a215-c2cc62d5642f" (UID: "cd5d65a2-f669-4c73-a215-c2cc62d5642f"). InnerVolumeSpecName "kube-api-access-kj9fv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.185812 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/cd5d65a2-f669-4c73-a215-c2cc62d5642f-secret-volume\") on node \"crc\" DevicePath \"\""
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.185846 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kj9fv\" (UniqueName: \"kubernetes.io/projected/cd5d65a2-f669-4c73-a215-c2cc62d5642f-kube-api-access-kj9fv\") on node \"crc\" DevicePath \"\""
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.185855 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cd5d65a2-f669-4c73-a215-c2cc62d5642f-config-volume\") on node \"crc\" DevicePath \"\""
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.797692 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6"
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.797765 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493375-jzcz6" event={"ID":"cd5d65a2-f669-4c73-a215-c2cc62d5642f","Type":"ContainerDied","Data":"1bcbd689d8193d64f53f494587e7dfb627add9696efda821d34abe4a7d007353"}
Jan 28 12:15:03 crc kubenswrapper[4804]: I0128 12:15:03.798321 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bcbd689d8193d64f53f494587e7dfb627add9696efda821d34abe4a7d007353"
Jan 28 12:15:04 crc kubenswrapper[4804]: I0128 12:15:04.099282 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"]
Jan 28 12:15:04 crc kubenswrapper[4804]: I0128 12:15:04.104074 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493330-gcdc5"]
Jan 28 12:15:04 crc kubenswrapper[4804]: I0128 12:15:04.926074 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83929dab-2f27-41a0-aaea-ec500ff4b6e7" path="/var/lib/kubelet/pods/83929dab-2f27-41a0-aaea-ec500ff4b6e7/volumes"
Jan 28 12:15:05 crc kubenswrapper[4804]: I0128 12:15:05.914919 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:15:05 crc kubenswrapper[4804]: E0128 12:15:05.915159 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:15:18 crc kubenswrapper[4804]: I0128 12:15:18.914919 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:15:18 crc kubenswrapper[4804]: E0128 12:15:18.916031 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:15:21 crc kubenswrapper[4804]: I0128 12:15:21.685566 4804 scope.go:117] "RemoveContainer" containerID="647a49fa2b0ef181a7c4caad26f72973e736662092d9439165eb23246f60d551"
Jan 28 12:15:31 crc kubenswrapper[4804]: I0128 12:15:31.915785 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:15:31 crc kubenswrapper[4804]: E0128 12:15:31.916567 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:15:42 crc kubenswrapper[4804]: I0128 12:15:42.916083 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:15:42 crc kubenswrapper[4804]: E0128 12:15:42.916988 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:15:53 crc kubenswrapper[4804]: I0128 12:15:53.915339 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:15:53 crc kubenswrapper[4804]: E0128 12:15:53.916111 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:16:08 crc kubenswrapper[4804]: I0128 12:16:08.918313 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:16:08 crc kubenswrapper[4804]: E0128 12:16:08.921382 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:16:21 crc kubenswrapper[4804]: I0128 12:16:21.915253 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d"
Jan 28 12:16:22 crc kubenswrapper[4804]: I0128 12:16:22.178431 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e"}
Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.959826 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-n96p5"]
Jan 28 12:17:47 crc kubenswrapper[4804]: E0128 12:17:47.960855 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerName="collect-profiles"
Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.960913 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerName="collect-profiles"
Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.961166 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd5d65a2-f669-4c73-a215-c2cc62d5642f" containerName="collect-profiles"
Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.963910 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-n96p5"
Jan 28 12:17:47 crc kubenswrapper[4804]: I0128 12:17:47.968850 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n96p5"]
Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.102224 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5"
Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.102398 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5"
Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.102457 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5"
Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.203562 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5"
Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.203624 4804 reconciler_common.go:218]
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.203705 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.204202 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.204232 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.221630 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"community-operators-n96p5\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.282245 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:48 crc kubenswrapper[4804]: I0128 12:17:48.805043 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.815368 4804 generic.go:334] "Generic (PLEG): container finished" podID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerID="2efbc03490ed43572699ec444996e43e335aa0f68aab150fa2a1ae8f5fa13a00" exitCode=0 Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.815430 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"2efbc03490ed43572699ec444996e43e335aa0f68aab150fa2a1ae8f5fa13a00"} Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.815917 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerStarted","Data":"1461707ab43564a244305b14d5c52007e50423eb9fb018aeff25d988a85fcd4d"} Jan 28 12:17:49 crc kubenswrapper[4804]: I0128 12:17:49.818738 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 12:17:50 crc kubenswrapper[4804]: I0128 12:17:50.824842 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerStarted","Data":"d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d"} Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.834541 4804 generic.go:334] "Generic (PLEG): container finished" podID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerID="d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d" exitCode=0 Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.834618 4804 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d"} Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.834752 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerStarted","Data":"d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409"} Jan 28 12:17:51 crc kubenswrapper[4804]: I0128 12:17:51.856695 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-n96p5" podStartSLOduration=3.410841289 podStartE2EDuration="4.856677599s" podCreationTimestamp="2026-01-28 12:17:47 +0000 UTC" firstStartedPulling="2026-01-28 12:17:49.818306245 +0000 UTC m=+3345.613186229" lastFinishedPulling="2026-01-28 12:17:51.264142555 +0000 UTC m=+3347.059022539" observedRunningTime="2026-01-28 12:17:51.855284976 +0000 UTC m=+3347.650164980" watchObservedRunningTime="2026-01-28 12:17:51.856677599 +0000 UTC m=+3347.651557593" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.283255 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.283815 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.340117 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 12:17:58.932199 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:17:58 crc kubenswrapper[4804]: I0128 
12:17:58.981119 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:18:00 crc kubenswrapper[4804]: I0128 12:18:00.897463 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-n96p5" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" containerID="cri-o://d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409" gracePeriod=2 Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908362 4804 generic.go:334] "Generic (PLEG): container finished" podID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerID="d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409" exitCode=0 Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908513 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409"} Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908718 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-n96p5" event={"ID":"227aeb2b-9a72-4194-8989-a1f38ed1c1fc","Type":"ContainerDied","Data":"1461707ab43564a244305b14d5c52007e50423eb9fb018aeff25d988a85fcd4d"} Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.908745 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1461707ab43564a244305b14d5c52007e50423eb9fb018aeff25d988a85fcd4d" Jan 28 12:18:01 crc kubenswrapper[4804]: I0128 12:18:01.920073 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.025131 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") pod \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.025207 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") pod \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.025254 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") pod \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\" (UID: \"227aeb2b-9a72-4194-8989-a1f38ed1c1fc\") " Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.026270 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities" (OuterVolumeSpecName: "utilities") pod "227aeb2b-9a72-4194-8989-a1f38ed1c1fc" (UID: "227aeb2b-9a72-4194-8989-a1f38ed1c1fc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.036679 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td" (OuterVolumeSpecName: "kube-api-access-f96td") pod "227aeb2b-9a72-4194-8989-a1f38ed1c1fc" (UID: "227aeb2b-9a72-4194-8989-a1f38ed1c1fc"). InnerVolumeSpecName "kube-api-access-f96td". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.077808 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "227aeb2b-9a72-4194-8989-a1f38ed1c1fc" (UID: "227aeb2b-9a72-4194-8989-a1f38ed1c1fc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.126845 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.126884 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f96td\" (UniqueName: \"kubernetes.io/projected/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-kube-api-access-f96td\") on node \"crc\" DevicePath \"\"" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.126935 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/227aeb2b-9a72-4194-8989-a1f38ed1c1fc-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.915474 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-n96p5" Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.952072 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:18:02 crc kubenswrapper[4804]: I0128 12:18:02.958595 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-n96p5"] Jan 28 12:18:04 crc kubenswrapper[4804]: I0128 12:18:04.931778 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" path="/var/lib/kubelet/pods/227aeb2b-9a72-4194-8989-a1f38ed1c1fc/volumes" Jan 28 12:18:42 crc kubenswrapper[4804]: I0128 12:18:42.582630 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:18:42 crc kubenswrapper[4804]: I0128 12:18:42.583345 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:19:12 crc kubenswrapper[4804]: I0128 12:19:12.582506 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:19:12 crc kubenswrapper[4804]: I0128 12:19:12.583169 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402198 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:13 crc kubenswrapper[4804]: E0128 12:19:13.402727 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-content" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402738 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-content" Jan 28 12:19:13 crc kubenswrapper[4804]: E0128 12:19:13.402767 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-utilities" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402774 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="extract-utilities" Jan 28 12:19:13 crc kubenswrapper[4804]: E0128 12:19:13.402783 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402790 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.402952 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="227aeb2b-9a72-4194-8989-a1f38ed1c1fc" containerName="registry-server" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.403955 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.431584 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.594504 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.594932 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.594978 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.696554 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.696685 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.696818 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.697130 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.697195 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.727550 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"certified-operators-r22km\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:13 crc kubenswrapper[4804]: I0128 12:19:13.739583 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.070751 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.424508 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca" exitCode=0 Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.424555 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca"} Jan 28 12:19:14 crc kubenswrapper[4804]: I0128 12:19:14.424797 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerStarted","Data":"456da1abdd937ecd19faef71326544de6e697da231fd39d63420c50ca22d3910"} Jan 28 12:19:15 crc kubenswrapper[4804]: I0128 12:19:15.436050 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerStarted","Data":"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"} Jan 28 12:19:16 crc kubenswrapper[4804]: I0128 12:19:16.445211 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36" exitCode=0 Jan 28 12:19:16 crc kubenswrapper[4804]: I0128 12:19:16.445289 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" 
event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"} Jan 28 12:19:17 crc kubenswrapper[4804]: I0128 12:19:17.454486 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerStarted","Data":"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"} Jan 28 12:19:18 crc kubenswrapper[4804]: I0128 12:19:18.483053 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-r22km" podStartSLOduration=2.668039816 podStartE2EDuration="5.483035136s" podCreationTimestamp="2026-01-28 12:19:13 +0000 UTC" firstStartedPulling="2026-01-28 12:19:14.42605343 +0000 UTC m=+3430.220933414" lastFinishedPulling="2026-01-28 12:19:17.24104875 +0000 UTC m=+3433.035928734" observedRunningTime="2026-01-28 12:19:18.478008039 +0000 UTC m=+3434.272888023" watchObservedRunningTime="2026-01-28 12:19:18.483035136 +0000 UTC m=+3434.277915120" Jan 28 12:19:23 crc kubenswrapper[4804]: I0128 12:19:23.740242 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:23 crc kubenswrapper[4804]: I0128 12:19:23.740652 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:23 crc kubenswrapper[4804]: I0128 12:19:23.781215 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:24 crc kubenswrapper[4804]: I0128 12:19:24.549349 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:24 crc kubenswrapper[4804]: I0128 12:19:24.594366 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:26 crc kubenswrapper[4804]: I0128 12:19:26.521613 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-r22km" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server" containerID="cri-o://84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e" gracePeriod=2 Jan 28 12:19:26 crc kubenswrapper[4804]: I0128 12:19:26.924734 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.082945 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") pod \"5d905c97-1bab-4517-885a-c30ce8c59b3c\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.083365 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") pod \"5d905c97-1bab-4517-885a-c30ce8c59b3c\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.083585 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") pod \"5d905c97-1bab-4517-885a-c30ce8c59b3c\" (UID: \"5d905c97-1bab-4517-885a-c30ce8c59b3c\") " Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.083899 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities" (OuterVolumeSpecName: "utilities") pod "5d905c97-1bab-4517-885a-c30ce8c59b3c" (UID: 
"5d905c97-1bab-4517-885a-c30ce8c59b3c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.084173 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.088631 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk" (OuterVolumeSpecName: "kube-api-access-b4thk") pod "5d905c97-1bab-4517-885a-c30ce8c59b3c" (UID: "5d905c97-1bab-4517-885a-c30ce8c59b3c"). InnerVolumeSpecName "kube-api-access-b4thk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.137246 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d905c97-1bab-4517-885a-c30ce8c59b3c" (UID: "5d905c97-1bab-4517-885a-c30ce8c59b3c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.185788 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d905c97-1bab-4517-885a-c30ce8c59b3c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.185826 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b4thk\" (UniqueName: \"kubernetes.io/projected/5d905c97-1bab-4517-885a-c30ce8c59b3c-kube-api-access-b4thk\") on node \"crc\" DevicePath \"\"" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530102 4804 generic.go:334] "Generic (PLEG): container finished" podID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e" exitCode=0 Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530219 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-r22km" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530211 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"} Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530814 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-r22km" event={"ID":"5d905c97-1bab-4517-885a-c30ce8c59b3c","Type":"ContainerDied","Data":"456da1abdd937ecd19faef71326544de6e697da231fd39d63420c50ca22d3910"} Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.530839 4804 scope.go:117] "RemoveContainer" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.554105 4804 scope.go:117] "RemoveContainer" 
containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.562483 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.567960 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-r22km"] Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.591497 4804 scope.go:117] "RemoveContainer" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.608404 4804 scope.go:117] "RemoveContainer" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e" Jan 28 12:19:27 crc kubenswrapper[4804]: E0128 12:19:27.608860 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e\": container with ID starting with 84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e not found: ID does not exist" containerID="84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.608905 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e"} err="failed to get container status \"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e\": rpc error: code = NotFound desc = could not find container \"84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e\": container with ID starting with 84ddc39b2c7db379b0cff886b733ff2ef856fb66e947b2095321566d95029d5e not found: ID does not exist" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.608926 4804 scope.go:117] "RemoveContainer" 
containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36" Jan 28 12:19:27 crc kubenswrapper[4804]: E0128 12:19:27.609222 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36\": container with ID starting with c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36 not found: ID does not exist" containerID="c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.609240 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36"} err="failed to get container status \"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36\": rpc error: code = NotFound desc = could not find container \"c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36\": container with ID starting with c5f385bc7280da6678bc99349715f1de1b264b82bde7f3a04ba2acf22c79eb36 not found: ID does not exist" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.609251 4804 scope.go:117] "RemoveContainer" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca" Jan 28 12:19:27 crc kubenswrapper[4804]: E0128 12:19:27.609543 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca\": container with ID starting with 0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca not found: ID does not exist" containerID="0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca" Jan 28 12:19:27 crc kubenswrapper[4804]: I0128 12:19:27.609569 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca"} err="failed to get container status \"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca\": rpc error: code = NotFound desc = could not find container \"0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca\": container with ID starting with 0fca0ffd9087a1a8164750a25ce8eef3229a92f49385ac268bc4c3d84b8561ca not found: ID does not exist" Jan 28 12:19:28 crc kubenswrapper[4804]: I0128 12:19:28.924561 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" path="/var/lib/kubelet/pods/5d905c97-1bab-4517-885a-c30ce8c59b3c/volumes" Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.582474 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.583273 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.583340 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.584226 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e"} 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:19:42 crc kubenswrapper[4804]: I0128 12:19:42.584346 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e" gracePeriod=600 Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.643675 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e" exitCode=0 Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.643717 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e"} Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.644209 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"} Jan 28 12:19:43 crc kubenswrapper[4804]: I0128 12:19:43.644252 4804 scope.go:117] "RemoveContainer" containerID="e196bda6489aebf8081134e643ccf8f385673b837a0db40c7266e8bb042ee85d" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.577572 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"] Jan 28 12:19:51 crc kubenswrapper[4804]: E0128 12:19:51.578442 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578456 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server" Jan 28 12:19:51 crc kubenswrapper[4804]: E0128 12:19:51.578473 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-content" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578481 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-content" Jan 28 12:19:51 crc kubenswrapper[4804]: E0128 12:19:51.578494 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-utilities" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578501 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="extract-utilities" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.578696 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="5d905c97-1bab-4517-885a-c30ce8c59b3c" containerName="registry-server" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.579869 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.599990 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"] Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.740637 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.740713 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.740861 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.841574 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.841664 4804 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.841717 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.842230 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.842664 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.866285 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"redhat-marketplace-j9s5x\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:51 crc kubenswrapper[4804]: I0128 12:19:51.899636 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.351802 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"] Jan 28 12:19:52 crc kubenswrapper[4804]: W0128 12:19:52.357522 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb1e4fe89_15ca_4c38_b6e0_3ebbdc7ce0fa.slice/crio-11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06 WatchSource:0}: Error finding container 11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06: Status 404 returned error can't find the container with id 11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06 Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.711230 4804 generic.go:334] "Generic (PLEG): container finished" podID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297" exitCode=0 Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.711275 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297"} Jan 28 12:19:52 crc kubenswrapper[4804]: I0128 12:19:52.711303 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerStarted","Data":"11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06"} Jan 28 12:19:53 crc kubenswrapper[4804]: I0128 12:19:53.719989 4804 generic.go:334] "Generic (PLEG): container finished" podID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d" exitCode=0 Jan 28 12:19:53 crc kubenswrapper[4804]: I0128 
12:19:53.720111 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d"} Jan 28 12:19:54 crc kubenswrapper[4804]: I0128 12:19:54.729727 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerStarted","Data":"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"} Jan 28 12:19:54 crc kubenswrapper[4804]: I0128 12:19:54.752946 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j9s5x" podStartSLOduration=2.314967958 podStartE2EDuration="3.752925914s" podCreationTimestamp="2026-01-28 12:19:51 +0000 UTC" firstStartedPulling="2026-01-28 12:19:52.712851792 +0000 UTC m=+3468.507731776" lastFinishedPulling="2026-01-28 12:19:54.150809718 +0000 UTC m=+3469.945689732" observedRunningTime="2026-01-28 12:19:54.75154691 +0000 UTC m=+3470.546426904" watchObservedRunningTime="2026-01-28 12:19:54.752925914 +0000 UTC m=+3470.547805898" Jan 28 12:20:01 crc kubenswrapper[4804]: I0128 12:20:01.900274 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:20:01 crc kubenswrapper[4804]: I0128 12:20:01.901482 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:20:01 crc kubenswrapper[4804]: I0128 12:20:01.945912 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:20:02 crc kubenswrapper[4804]: I0128 12:20:02.821685 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 
12:20:02 crc kubenswrapper[4804]: I0128 12:20:02.870662 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"] Jan 28 12:20:04 crc kubenswrapper[4804]: I0128 12:20:04.797184 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j9s5x" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server" containerID="cri-o://b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb" gracePeriod=2 Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.261243 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.332720 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") pod \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.332865 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") pod \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.332910 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") pod \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\" (UID: \"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa\") " Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.333774 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities" (OuterVolumeSpecName: "utilities") pod "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" (UID: "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.338378 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l" (OuterVolumeSpecName: "kube-api-access-2bt4l") pod "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" (UID: "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa"). InnerVolumeSpecName "kube-api-access-2bt4l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.359200 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" (UID: "b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.434989 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.435026 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2bt4l\" (UniqueName: \"kubernetes.io/projected/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-kube-api-access-2bt4l\") on node \"crc\" DevicePath \"\"" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.435039 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804278 4804 generic.go:334] "Generic (PLEG): container finished" podID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb" exitCode=0 Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804319 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"} Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804344 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j9s5x" event={"ID":"b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa","Type":"ContainerDied","Data":"11eb98c881313ae03eaee5b0f6f9bd50b214d46141ee40f31e239803daa13e06"} Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.804362 4804 scope.go:117] "RemoveContainer" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 
12:20:05.804459 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j9s5x" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.822300 4804 scope.go:117] "RemoveContainer" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.836795 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"] Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.843188 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j9s5x"] Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.853814 4804 scope.go:117] "RemoveContainer" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.879982 4804 scope.go:117] "RemoveContainer" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb" Jan 28 12:20:05 crc kubenswrapper[4804]: E0128 12:20:05.882725 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb\": container with ID starting with b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb not found: ID does not exist" containerID="b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.882778 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb"} err="failed to get container status \"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb\": rpc error: code = NotFound desc = could not find container \"b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb\": container with ID starting with 
b73e923dfd903310323646596080a19b29a2994b349ef8afff9bbf25903a01cb not found: ID does not exist" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.882806 4804 scope.go:117] "RemoveContainer" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d" Jan 28 12:20:05 crc kubenswrapper[4804]: E0128 12:20:05.884139 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d\": container with ID starting with e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d not found: ID does not exist" containerID="e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.884179 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d"} err="failed to get container status \"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d\": rpc error: code = NotFound desc = could not find container \"e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d\": container with ID starting with e819fb99b6b318e2b65b168c9766f83a0bb238db0f3141f8bb101ec41e42d26d not found: ID does not exist" Jan 28 12:20:05 crc kubenswrapper[4804]: I0128 12:20:05.884205 4804 scope.go:117] "RemoveContainer" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297" Jan 28 12:20:05 crc kubenswrapper[4804]: E0128 12:20:05.884651 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297\": container with ID starting with e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297 not found: ID does not exist" containerID="e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297" Jan 28 12:20:05 crc 
kubenswrapper[4804]: I0128 12:20:05.884691 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297"} err="failed to get container status \"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297\": rpc error: code = NotFound desc = could not find container \"e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297\": container with ID starting with e118617f0948e3281225d5aca745a203d6c49874b4ddc46ea84fbd3e712c4297 not found: ID does not exist" Jan 28 12:20:06 crc kubenswrapper[4804]: I0128 12:20:06.931712 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" path="/var/lib/kubelet/pods/b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa/volumes" Jan 28 12:21:42 crc kubenswrapper[4804]: I0128 12:21:42.581843 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:21:42 crc kubenswrapper[4804]: I0128 12:21:42.582394 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:22:12 crc kubenswrapper[4804]: I0128 12:22:12.582347 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:22:12 crc kubenswrapper[4804]: I0128 12:22:12.582809 4804 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.582310 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.582991 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.583058 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.583676 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.583736 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" 
containerName="machine-config-daemon" containerID="cri-o://86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" gracePeriod=600 Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.897262 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" exitCode=0 Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.897321 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a"} Jan 28 12:22:42 crc kubenswrapper[4804]: I0128 12:22:42.897668 4804 scope.go:117] "RemoveContainer" containerID="bc09c3a58bfeacbb95f858a207ee4e75804e1451287317e8d420ed980a50ed4e" Jan 28 12:22:43 crc kubenswrapper[4804]: E0128 12:22:43.300334 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:22:43 crc kubenswrapper[4804]: I0128 12:22:43.904815 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:22:43 crc kubenswrapper[4804]: E0128 12:22:43.905630 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:22:57 crc kubenswrapper[4804]: I0128 12:22:57.915618 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:22:57 crc kubenswrapper[4804]: E0128 12:22:57.916629 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:23:11 crc kubenswrapper[4804]: I0128 12:23:11.914950 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:23:11 crc kubenswrapper[4804]: E0128 12:23:11.915925 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.502658 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-k4676"] Jan 28 12:23:16 crc kubenswrapper[4804]: E0128 12:23:16.503325 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="extract-content" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503337 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" 
containerName="extract-content" Jan 28 12:23:16 crc kubenswrapper[4804]: E0128 12:23:16.503353 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503359 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server" Jan 28 12:23:16 crc kubenswrapper[4804]: E0128 12:23:16.503380 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="extract-utilities" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503386 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="extract-utilities" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.503546 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1e4fe89-15ca-4c38-b6e0-3ebbdc7ce0fa" containerName="registry-server" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.504424 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.518421 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"] Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.595187 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.595271 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.595366 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.696440 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.696544 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.696596 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.697303 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.697382 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.717725 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"redhat-operators-k4676\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:16 crc kubenswrapper[4804]: I0128 12:23:16.826152 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:17 crc kubenswrapper[4804]: I0128 12:23:17.293472 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"] Jan 28 12:23:17 crc kubenswrapper[4804]: I0128 12:23:17.356062 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerStarted","Data":"4b15619cc1b1a276b7a17289d167063b4c95da5e04237997b496ec98be8c4e08"} Jan 28 12:23:18 crc kubenswrapper[4804]: I0128 12:23:18.362816 4804 generic.go:334] "Generic (PLEG): container finished" podID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerID="00a189a86c1cd24cb7257f23ccf5d63af9671102f4a6b5059dc3253a4b8e8955" exitCode=0 Jan 28 12:23:18 crc kubenswrapper[4804]: I0128 12:23:18.362919 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"00a189a86c1cd24cb7257f23ccf5d63af9671102f4a6b5059dc3253a4b8e8955"} Jan 28 12:23:18 crc kubenswrapper[4804]: I0128 12:23:18.364483 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 12:23:20 crc kubenswrapper[4804]: I0128 12:23:20.381543 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerStarted","Data":"02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc"} Jan 28 12:23:21 crc kubenswrapper[4804]: I0128 12:23:21.391811 4804 generic.go:334] "Generic (PLEG): container finished" podID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerID="02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc" exitCode=0 Jan 28 12:23:21 crc kubenswrapper[4804]: I0128 12:23:21.391860 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc"} Jan 28 12:23:22 crc kubenswrapper[4804]: I0128 12:23:22.400768 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerStarted","Data":"0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7"} Jan 28 12:23:22 crc kubenswrapper[4804]: I0128 12:23:22.427272 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-k4676" podStartSLOduration=3.023284493 podStartE2EDuration="6.427256061s" podCreationTimestamp="2026-01-28 12:23:16 +0000 UTC" firstStartedPulling="2026-01-28 12:23:18.364186849 +0000 UTC m=+3674.159066833" lastFinishedPulling="2026-01-28 12:23:21.768158417 +0000 UTC m=+3677.563038401" observedRunningTime="2026-01-28 12:23:22.42181246 +0000 UTC m=+3678.216692444" watchObservedRunningTime="2026-01-28 12:23:22.427256061 +0000 UTC m=+3678.222136045" Jan 28 12:23:26 crc kubenswrapper[4804]: I0128 12:23:26.826667 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:26 crc kubenswrapper[4804]: I0128 12:23:26.827304 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:26 crc kubenswrapper[4804]: I0128 12:23:26.915295 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:23:26 crc kubenswrapper[4804]: E0128 12:23:26.915621 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:23:27 crc kubenswrapper[4804]: I0128 12:23:27.887519 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-k4676" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server" probeResult="failure" output=< Jan 28 12:23:27 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s Jan 28 12:23:27 crc kubenswrapper[4804]: > Jan 28 12:23:36 crc kubenswrapper[4804]: I0128 12:23:36.869110 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:36 crc kubenswrapper[4804]: I0128 12:23:36.922339 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.228012 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"] Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.230015 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-k4676" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server" containerID="cri-o://0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7" gracePeriod=2 Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.541104 4804 generic.go:334] "Generic (PLEG): container finished" podID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerID="0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7" exitCode=0 Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.541178 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" 
event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7"} Jan 28 12:23:41 crc kubenswrapper[4804]: I0128 12:23:41.915026 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:23:41 crc kubenswrapper[4804]: E0128 12:23:41.915273 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.202897 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.271645 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") pod \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.271709 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") pod \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.271781 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") pod 
\"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\" (UID: \"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e\") " Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.272795 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities" (OuterVolumeSpecName: "utilities") pod "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" (UID: "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.277343 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg" (OuterVolumeSpecName: "kube-api-access-dggqg") pod "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" (UID: "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e"). InnerVolumeSpecName "kube-api-access-dggqg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.374714 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.374760 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dggqg\" (UniqueName: \"kubernetes.io/projected/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-kube-api-access-dggqg\") on node \"crc\" DevicePath \"\"" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.398115 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" (UID: "e2e3d0a3-fa19-4faf-b90a-85c7fb91266e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.476806 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.552545 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-k4676" event={"ID":"e2e3d0a3-fa19-4faf-b90a-85c7fb91266e","Type":"ContainerDied","Data":"4b15619cc1b1a276b7a17289d167063b4c95da5e04237997b496ec98be8c4e08"} Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.552611 4804 scope.go:117] "RemoveContainer" containerID="0e5e98052fb832d0ad8813c0a5cca76e9b801dd58e6e870b461c2d96f2c0a2d7" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.552657 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-k4676" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.569753 4804 scope.go:117] "RemoveContainer" containerID="02c4d0b8966475afdf876a5f943d19a27ab1e56b368cae957aa5b97c99887ffc" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.585835 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"] Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.593401 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-k4676"] Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.618308 4804 scope.go:117] "RemoveContainer" containerID="00a189a86c1cd24cb7257f23ccf5d63af9671102f4a6b5059dc3253a4b8e8955" Jan 28 12:23:42 crc kubenswrapper[4804]: I0128 12:23:42.925498 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" path="/var/lib/kubelet/pods/e2e3d0a3-fa19-4faf-b90a-85c7fb91266e/volumes" Jan 28 12:23:56 crc 
kubenswrapper[4804]: I0128 12:23:56.915122 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:23:56 crc kubenswrapper[4804]: E0128 12:23:56.916601 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:24:07 crc kubenswrapper[4804]: I0128 12:24:07.915044 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:24:07 crc kubenswrapper[4804]: E0128 12:24:07.915780 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.909866 4804 scope.go:117] "RemoveContainer" containerID="2efbc03490ed43572699ec444996e43e335aa0f68aab150fa2a1ae8f5fa13a00" Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.915228 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:24:21 crc kubenswrapper[4804]: E0128 12:24:21.915455 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.935462 4804 scope.go:117] "RemoveContainer" containerID="d39d3f3d9e734119d1731a4b66194da17cd232e9e7b4df4c0d4594356663df0d" Jan 28 12:24:21 crc kubenswrapper[4804]: I0128 12:24:21.962442 4804 scope.go:117] "RemoveContainer" containerID="d6a3a8de520aca058f2d55e0c6ccaa8d7dbb2775150ff351e3a1e5c747073409" Jan 28 12:24:35 crc kubenswrapper[4804]: I0128 12:24:35.915097 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:24:35 crc kubenswrapper[4804]: E0128 12:24:35.917139 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:24:47 crc kubenswrapper[4804]: I0128 12:24:47.915412 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:24:47 crc kubenswrapper[4804]: E0128 12:24:47.916183 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:24:58 crc kubenswrapper[4804]: I0128 12:24:58.915447 4804 
scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:24:58 crc kubenswrapper[4804]: E0128 12:24:58.916082 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:25:13 crc kubenswrapper[4804]: I0128 12:25:13.915427 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:25:13 crc kubenswrapper[4804]: E0128 12:25:13.916540 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:25:25 crc kubenswrapper[4804]: I0128 12:25:25.915339 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:25:25 crc kubenswrapper[4804]: E0128 12:25:25.916295 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:25:36 crc kubenswrapper[4804]: I0128 
12:25:36.915378 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:25:36 crc kubenswrapper[4804]: E0128 12:25:36.918391 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:25:49 crc kubenswrapper[4804]: I0128 12:25:49.916162 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:25:49 crc kubenswrapper[4804]: E0128 12:25:49.917448 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:26:01 crc kubenswrapper[4804]: I0128 12:26:01.915248 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:26:01 crc kubenswrapper[4804]: E0128 12:26:01.916030 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:26:14 crc 
kubenswrapper[4804]: I0128 12:26:14.919016 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:26:14 crc kubenswrapper[4804]: E0128 12:26:14.919739 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:26:25 crc kubenswrapper[4804]: I0128 12:26:25.915467 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:26:25 crc kubenswrapper[4804]: E0128 12:26:25.916484 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:26:39 crc kubenswrapper[4804]: I0128 12:26:39.915735 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:26:39 crc kubenswrapper[4804]: E0128 12:26:39.919435 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 
28 12:26:51 crc kubenswrapper[4804]: I0128 12:26:51.915212 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:26:51 crc kubenswrapper[4804]: E0128 12:26:51.916272 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:27:05 crc kubenswrapper[4804]: I0128 12:27:05.915640 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:27:05 crc kubenswrapper[4804]: E0128 12:27:05.916359 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:27:19 crc kubenswrapper[4804]: I0128 12:27:19.915082 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:27:19 crc kubenswrapper[4804]: E0128 12:27:19.915797 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:27:32 crc kubenswrapper[4804]: I0128 12:27:32.914990 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:27:32 crc kubenswrapper[4804]: E0128 12:27:32.916964 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:27:47 crc kubenswrapper[4804]: I0128 12:27:47.915512 4804 scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:27:48 crc kubenswrapper[4804]: I0128 12:27:48.386139 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177"} Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.884803 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:44 crc kubenswrapper[4804]: E0128 12:28:44.886843 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server" Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893149 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server" Jan 28 12:28:44 crc kubenswrapper[4804]: E0128 12:28:44.893330 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" 
containerName="extract-utilities" Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893410 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="extract-utilities" Jan 28 12:28:44 crc kubenswrapper[4804]: E0128 12:28:44.893493 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="extract-content" Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893566 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="extract-content" Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.893946 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2e3d0a3-fa19-4faf-b90a-85c7fb91266e" containerName="registry-server" Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.895204 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:44 crc kubenswrapper[4804]: I0128 12:28:44.895438 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.086818 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.086919 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " 
pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.087123 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.187929 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.187972 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.188021 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.188504 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " 
pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.188747 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.209387 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"community-operators-j2mnv\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.223930 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.712285 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:45 crc kubenswrapper[4804]: I0128 12:28:45.777809 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerStarted","Data":"dd209b0c3b982a568dd0db1a3f08e9013293b854458609a8faa07aeb543cea0a"} Jan 28 12:28:46 crc kubenswrapper[4804]: I0128 12:28:46.784447 4804 generic.go:334] "Generic (PLEG): container finished" podID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" exitCode=0 Jan 28 12:28:46 crc kubenswrapper[4804]: I0128 12:28:46.784647 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" 
event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4"} Jan 28 12:28:46 crc kubenswrapper[4804]: I0128 12:28:46.786511 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 28 12:28:48 crc kubenswrapper[4804]: I0128 12:28:48.799173 4804 generic.go:334] "Generic (PLEG): container finished" podID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" exitCode=0 Jan 28 12:28:48 crc kubenswrapper[4804]: I0128 12:28:48.799213 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5"} Jan 28 12:28:51 crc kubenswrapper[4804]: I0128 12:28:51.820731 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerStarted","Data":"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b"} Jan 28 12:28:51 crc kubenswrapper[4804]: I0128 12:28:51.847962 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-j2mnv" podStartSLOduration=3.961645325 podStartE2EDuration="7.847928404s" podCreationTimestamp="2026-01-28 12:28:44 +0000 UTC" firstStartedPulling="2026-01-28 12:28:46.786310285 +0000 UTC m=+4002.581190269" lastFinishedPulling="2026-01-28 12:28:50.672593324 +0000 UTC m=+4006.467473348" observedRunningTime="2026-01-28 12:28:51.839075767 +0000 UTC m=+4007.633955751" watchObservedRunningTime="2026-01-28 12:28:51.847928404 +0000 UTC m=+4007.642808388" Jan 28 12:28:55 crc kubenswrapper[4804]: I0128 12:28:55.225108 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:55 crc kubenswrapper[4804]: I0128 12:28:55.225455 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:55 crc kubenswrapper[4804]: I0128 12:28:55.275068 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:56 crc kubenswrapper[4804]: I0128 12:28:56.305415 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:56 crc kubenswrapper[4804]: I0128 12:28:56.358677 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:57 crc kubenswrapper[4804]: I0128 12:28:57.856813 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-j2mnv" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" containerID="cri-o://97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" gracePeriod=2 Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.244129 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.357130 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") pod \"98806e2d-b65e-409f-b942-e8c1d833c27b\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.357196 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") pod \"98806e2d-b65e-409f-b942-e8c1d833c27b\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.357231 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") pod \"98806e2d-b65e-409f-b942-e8c1d833c27b\" (UID: \"98806e2d-b65e-409f-b942-e8c1d833c27b\") " Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.358592 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities" (OuterVolumeSpecName: "utilities") pod "98806e2d-b65e-409f-b942-e8c1d833c27b" (UID: "98806e2d-b65e-409f-b942-e8c1d833c27b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.369989 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6" (OuterVolumeSpecName: "kube-api-access-tzjb6") pod "98806e2d-b65e-409f-b942-e8c1d833c27b" (UID: "98806e2d-b65e-409f-b942-e8c1d833c27b"). InnerVolumeSpecName "kube-api-access-tzjb6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.411937 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98806e2d-b65e-409f-b942-e8c1d833c27b" (UID: "98806e2d-b65e-409f-b942-e8c1d833c27b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.458685 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.458723 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tzjb6\" (UniqueName: \"kubernetes.io/projected/98806e2d-b65e-409f-b942-e8c1d833c27b-kube-api-access-tzjb6\") on node \"crc\" DevicePath \"\"" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.458735 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98806e2d-b65e-409f-b942-e8c1d833c27b-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.863960 4804 generic.go:334] "Generic (PLEG): container finished" podID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" exitCode=0 Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864014 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b"} Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864045 4804 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-j2mnv" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864076 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-j2mnv" event={"ID":"98806e2d-b65e-409f-b942-e8c1d833c27b","Type":"ContainerDied","Data":"dd209b0c3b982a568dd0db1a3f08e9013293b854458609a8faa07aeb543cea0a"} Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.864097 4804 scope.go:117] "RemoveContainer" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.888599 4804 scope.go:117] "RemoveContainer" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.900782 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.907830 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-j2mnv"] Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.917489 4804 scope.go:117] "RemoveContainer" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.926567 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" path="/var/lib/kubelet/pods/98806e2d-b65e-409f-b942-e8c1d833c27b/volumes" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.936343 4804 scope.go:117] "RemoveContainer" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" Jan 28 12:28:58 crc kubenswrapper[4804]: E0128 12:28:58.936924 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b\": container with ID 
starting with 97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b not found: ID does not exist" containerID="97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.936975 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b"} err="failed to get container status \"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b\": rpc error: code = NotFound desc = could not find container \"97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b\": container with ID starting with 97598e2a24319049417071975ffac8fa75af5313886ff3e33c7fdf022717660b not found: ID does not exist" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.937029 4804 scope.go:117] "RemoveContainer" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" Jan 28 12:28:58 crc kubenswrapper[4804]: E0128 12:28:58.937645 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5\": container with ID starting with 1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5 not found: ID does not exist" containerID="1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.937697 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5"} err="failed to get container status \"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5\": rpc error: code = NotFound desc = could not find container \"1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5\": container with ID starting with 1509f204e2d017ee71854fc1bc7d10fd79bf2f974643db094d191ee08eda96c5 not found: 
ID does not exist" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.937732 4804 scope.go:117] "RemoveContainer" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" Jan 28 12:28:58 crc kubenswrapper[4804]: E0128 12:28:58.938214 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4\": container with ID starting with c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4 not found: ID does not exist" containerID="c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4" Jan 28 12:28:58 crc kubenswrapper[4804]: I0128 12:28:58.938265 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4"} err="failed to get container status \"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4\": rpc error: code = NotFound desc = could not find container \"c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4\": container with ID starting with c3ce3c84245e7f576169633de895dd9ba89f877aa24b668d7acdcb8a349726f4 not found: ID does not exist" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.174733 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk"] Jan 28 12:30:00 crc kubenswrapper[4804]: E0128 12:30:00.175714 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-utilities" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175731 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-utilities" Jan 28 12:30:00 crc kubenswrapper[4804]: E0128 12:30:00.175746 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-content" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175754 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="extract-content" Jan 28 12:30:00 crc kubenswrapper[4804]: E0128 12:30:00.175771 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175778 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.175988 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="98806e2d-b65e-409f-b942-e8c1d833c27b" containerName="registry-server" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.176722 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.179076 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.179314 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.185451 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk"] Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.313867 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod 
\"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.314248 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.314329 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.415327 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.415408 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.415434 4804 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.416848 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.421074 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.433378 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod \"collect-profiles-29493390-wm9xk\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.518028 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:00 crc kubenswrapper[4804]: I0128 12:30:00.928962 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk"] Jan 28 12:30:01 crc kubenswrapper[4804]: I0128 12:30:01.284476 4804 generic.go:334] "Generic (PLEG): container finished" podID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerID="e8ea830423bf5163ac427bed1acd7b672005786797bb0ad27e9106f80ca5a96e" exitCode=0 Jan 28 12:30:01 crc kubenswrapper[4804]: I0128 12:30:01.284537 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" event={"ID":"f090523c-e035-4be4-8124-2946e5bbe8a3","Type":"ContainerDied","Data":"e8ea830423bf5163ac427bed1acd7b672005786797bb0ad27e9106f80ca5a96e"} Jan 28 12:30:01 crc kubenswrapper[4804]: I0128 12:30:01.284649 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" event={"ID":"f090523c-e035-4be4-8124-2946e5bbe8a3","Type":"ContainerStarted","Data":"9d3b8e35f948624a0e54e6fa86a9b91cc379ce5ca88590bc129ad97ef7655dc5"} Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.584768 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.645638 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") pod \"f090523c-e035-4be4-8124-2946e5bbe8a3\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.645742 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") pod \"f090523c-e035-4be4-8124-2946e5bbe8a3\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.645769 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") pod \"f090523c-e035-4be4-8124-2946e5bbe8a3\" (UID: \"f090523c-e035-4be4-8124-2946e5bbe8a3\") " Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.646416 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume" (OuterVolumeSpecName: "config-volume") pod "f090523c-e035-4be4-8124-2946e5bbe8a3" (UID: "f090523c-e035-4be4-8124-2946e5bbe8a3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.652789 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f090523c-e035-4be4-8124-2946e5bbe8a3" (UID: "f090523c-e035-4be4-8124-2946e5bbe8a3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.653780 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk" (OuterVolumeSpecName: "kube-api-access-cxtzk") pod "f090523c-e035-4be4-8124-2946e5bbe8a3" (UID: "f090523c-e035-4be4-8124-2946e5bbe8a3"). InnerVolumeSpecName "kube-api-access-cxtzk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.747628 4804 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f090523c-e035-4be4-8124-2946e5bbe8a3-config-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.748077 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cxtzk\" (UniqueName: \"kubernetes.io/projected/f090523c-e035-4be4-8124-2946e5bbe8a3-kube-api-access-cxtzk\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:02 crc kubenswrapper[4804]: I0128 12:30:02.748094 4804 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f090523c-e035-4be4-8124-2946e5bbe8a3-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.300291 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" event={"ID":"f090523c-e035-4be4-8124-2946e5bbe8a3","Type":"ContainerDied","Data":"9d3b8e35f948624a0e54e6fa86a9b91cc379ce5ca88590bc129ad97ef7655dc5"} Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.300329 4804 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d3b8e35f948624a0e54e6fa86a9b91cc379ce5ca88590bc129ad97ef7655dc5" Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.300406 4804 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29493390-wm9xk" Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.675362 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"] Jan 28 12:30:03 crc kubenswrapper[4804]: I0128 12:30:03.680100 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29493345-psbzr"] Jan 28 12:30:04 crc kubenswrapper[4804]: I0128 12:30:04.922398 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="deda2a52-b6b6-4b65-87d2-26a7ca06a7dc" path="/var/lib/kubelet/pods/deda2a52-b6b6-4b65-87d2-26a7ca06a7dc/volumes" Jan 28 12:30:12 crc kubenswrapper[4804]: I0128 12:30:12.582190 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:30:12 crc kubenswrapper[4804]: I0128 12:30:12.583402 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:30:22 crc kubenswrapper[4804]: I0128 12:30:22.100713 4804 scope.go:117] "RemoveContainer" containerID="ec4494c033a2934fc01293e9dd81cb1af39c7d20a6e53ef7ee0ed4ef65497625" Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.982041 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:31 crc kubenswrapper[4804]: E0128 12:30:31.982993 4804 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerName="collect-profiles" Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.983008 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerName="collect-profiles" Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.983182 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="f090523c-e035-4be4-8124-2946e5bbe8a3" containerName="collect-profiles" Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.984353 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:31 crc kubenswrapper[4804]: I0128 12:30:31.995666 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.064640 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.064784 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.064823 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"redhat-marketplace-tl2sb\" (UID: 
\"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.165932 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.165987 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.166034 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.166580 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.166629 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " 
pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.188862 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"redhat-marketplace-tl2sb\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.307549 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:32 crc kubenswrapper[4804]: I0128 12:30:32.722611 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:33 crc kubenswrapper[4804]: I0128 12:30:33.532147 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" exitCode=0 Jan 28 12:30:33 crc kubenswrapper[4804]: I0128 12:30:33.532212 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612"} Jan 28 12:30:33 crc kubenswrapper[4804]: I0128 12:30:33.532246 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerStarted","Data":"139cec1f5b9901b8ffc89b73806b83f5779d5e9653677ae4096b5740d84e1e64"} Jan 28 12:30:34 crc kubenswrapper[4804]: I0128 12:30:34.543336 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" 
event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerStarted","Data":"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d"} Jan 28 12:30:35 crc kubenswrapper[4804]: I0128 12:30:35.550625 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" exitCode=0 Jan 28 12:30:35 crc kubenswrapper[4804]: I0128 12:30:35.550953 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d"} Jan 28 12:30:36 crc kubenswrapper[4804]: I0128 12:30:36.559479 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerStarted","Data":"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd"} Jan 28 12:30:36 crc kubenswrapper[4804]: I0128 12:30:36.577661 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-tl2sb" podStartSLOduration=3.156965048 podStartE2EDuration="5.577647137s" podCreationTimestamp="2026-01-28 12:30:31 +0000 UTC" firstStartedPulling="2026-01-28 12:30:33.53383634 +0000 UTC m=+4109.328716324" lastFinishedPulling="2026-01-28 12:30:35.954518439 +0000 UTC m=+4111.749398413" observedRunningTime="2026-01-28 12:30:36.575200481 +0000 UTC m=+4112.370080465" watchObservedRunningTime="2026-01-28 12:30:36.577647137 +0000 UTC m=+4112.372527121" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.308050 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.308653 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.363208 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.582223 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.582288 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.648392 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:42 crc kubenswrapper[4804]: I0128 12:30:42.689822 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:44 crc kubenswrapper[4804]: I0128 12:30:44.619365 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-tl2sb" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server" containerID="cri-o://32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" gracePeriod=2 Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.309874 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.339385 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") pod \"d1fb8773-4961-41e7-9111-b828c5e51c99\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.339436 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") pod \"d1fb8773-4961-41e7-9111-b828c5e51c99\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.339557 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") pod \"d1fb8773-4961-41e7-9111-b828c5e51c99\" (UID: \"d1fb8773-4961-41e7-9111-b828c5e51c99\") " Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.340706 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities" (OuterVolumeSpecName: "utilities") pod "d1fb8773-4961-41e7-9111-b828c5e51c99" (UID: "d1fb8773-4961-41e7-9111-b828c5e51c99"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.350142 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf" (OuterVolumeSpecName: "kube-api-access-jwrzf") pod "d1fb8773-4961-41e7-9111-b828c5e51c99" (UID: "d1fb8773-4961-41e7-9111-b828c5e51c99"). InnerVolumeSpecName "kube-api-access-jwrzf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.441095 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.441131 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jwrzf\" (UniqueName: \"kubernetes.io/projected/d1fb8773-4961-41e7-9111-b828c5e51c99-kube-api-access-jwrzf\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627118 4804 generic.go:334] "Generic (PLEG): container finished" podID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" exitCode=0 Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627166 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd"} Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627204 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-tl2sb" event={"ID":"d1fb8773-4961-41e7-9111-b828c5e51c99","Type":"ContainerDied","Data":"139cec1f5b9901b8ffc89b73806b83f5779d5e9653677ae4096b5740d84e1e64"} Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627227 4804 scope.go:117] "RemoveContainer" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.627237 4804 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-tl2sb" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.653227 4804 scope.go:117] "RemoveContainer" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.738751 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d1fb8773-4961-41e7-9111-b828c5e51c99" (UID: "d1fb8773-4961-41e7-9111-b828c5e51c99"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.746044 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d1fb8773-4961-41e7-9111-b828c5e51c99-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.754618 4804 scope.go:117] "RemoveContainer" containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.770938 4804 scope.go:117] "RemoveContainer" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" Jan 28 12:30:45 crc kubenswrapper[4804]: E0128 12:30:45.771283 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd\": container with ID starting with 32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd not found: ID does not exist" containerID="32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771312 4804 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd"} err="failed to get container status \"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd\": rpc error: code = NotFound desc = could not find container \"32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd\": container with ID starting with 32087fe68566a6fce293f5a612d27e1fd44c5ca4fed42d9d487e7769f23b1dfd not found: ID does not exist" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771332 4804 scope.go:117] "RemoveContainer" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" Jan 28 12:30:45 crc kubenswrapper[4804]: E0128 12:30:45.771581 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d\": container with ID starting with 288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d not found: ID does not exist" containerID="288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771605 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d"} err="failed to get container status \"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d\": rpc error: code = NotFound desc = could not find container \"288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d\": container with ID starting with 288c2466b5d26a883add0518815647ba66577e00f126bc988bc1cd4152d9e93d not found: ID does not exist" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771620 4804 scope.go:117] "RemoveContainer" containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" Jan 28 12:30:45 crc kubenswrapper[4804]: E0128 12:30:45.771858 4804 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612\": container with ID starting with 6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612 not found: ID does not exist" containerID="6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.771894 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612"} err="failed to get container status \"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612\": rpc error: code = NotFound desc = could not find container \"6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612\": container with ID starting with 6064dbf6010f2aa6f81f5c7dcd281311b7ca9c6692316ea3e5b18332d73d8612 not found: ID does not exist" Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.960274 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:45 crc kubenswrapper[4804]: I0128 12:30:45.970367 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-tl2sb"] Jan 28 12:30:46 crc kubenswrapper[4804]: I0128 12:30:46.939530 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" path="/var/lib/kubelet/pods/d1fb8773-4961-41e7-9111-b828c5e51c99/volumes" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.582374 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.583585 4804 prober.go:107] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.583685 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.584817 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.584919 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177" gracePeriod=600 Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.809059 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177" exitCode=0 Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.809244 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177"} Jan 28 12:31:12 crc kubenswrapper[4804]: I0128 12:31:12.809276 4804 
scope.go:117] "RemoveContainer" containerID="86f68d741bedf9566d9225cedd38faf28811a0b40223fd6c8edb8ab22850779a" Jan 28 12:31:13 crc kubenswrapper[4804]: I0128 12:31:13.821327 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"} Jan 28 12:33:12 crc kubenswrapper[4804]: I0128 12:33:12.582511 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:33:12 crc kubenswrapper[4804]: I0128 12:33:12.583120 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:33:42 crc kubenswrapper[4804]: I0128 12:33:42.582346 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:33:42 crc kubenswrapper[4804]: I0128 12:33:42.582908 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 
12:34:12.582360 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.582984 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.583030 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.583584 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 28 12:34:12 crc kubenswrapper[4804]: I0128 12:34:12.583639 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" gracePeriod=600 Jan 28 12:34:13 crc kubenswrapper[4804]: E0128 12:34:13.245362 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.404248 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" exitCode=0 Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.404314 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"} Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.404365 4804 scope.go:117] "RemoveContainer" containerID="2f722375e5ff5e739f4fa1631080addfc6120b6c84d42b00241ff639a1e25177" Jan 28 12:34:13 crc kubenswrapper[4804]: I0128 12:34:13.405358 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:13 crc kubenswrapper[4804]: E0128 12:34:13.405817 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:25 crc kubenswrapper[4804]: I0128 12:34:25.914934 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:25 crc kubenswrapper[4804]: E0128 12:34:25.915625 4804 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:40 crc kubenswrapper[4804]: I0128 12:34:40.916264 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:40 crc kubenswrapper[4804]: E0128 12:34:40.918567 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:34:52 crc kubenswrapper[4804]: I0128 12:34:52.914668 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:34:52 crc kubenswrapper[4804]: E0128 12:34:52.915350 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:04 crc kubenswrapper[4804]: I0128 12:35:04.923122 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:04 crc kubenswrapper[4804]: E0128 12:35:04.924214 4804 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:19 crc kubenswrapper[4804]: I0128 12:35:19.915355 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:19 crc kubenswrapper[4804]: E0128 12:35:19.916466 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:34 crc kubenswrapper[4804]: I0128 12:35:34.921021 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:34 crc kubenswrapper[4804]: E0128 12:35:34.921845 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:35:49 crc kubenswrapper[4804]: I0128 12:35:49.915068 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:35:49 crc kubenswrapper[4804]: E0128 12:35:49.915851 4804 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:36:04 crc kubenswrapper[4804]: I0128 12:36:04.919391 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:36:04 crc kubenswrapper[4804]: E0128 12:36:04.920310 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:36:19 crc kubenswrapper[4804]: I0128 12:36:19.915099 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:36:19 crc kubenswrapper[4804]: E0128 12:36:19.915907 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:36:31 crc kubenswrapper[4804]: I0128 12:36:31.915699 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:36:31 crc kubenswrapper[4804]: E0128 12:36:31.916512 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:36:42 crc kubenswrapper[4804]: I0128 12:36:42.915532 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:36:42 crc kubenswrapper[4804]: E0128 12:36:42.917101 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:36:56 crc kubenswrapper[4804]: I0128 12:36:56.915310 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:36:56 crc kubenswrapper[4804]: E0128 12:36:56.916185 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.529905 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"]
Jan 28 12:37:08 crc kubenswrapper[4804]: E0128 12:37:08.530866 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-content"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.530899 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-content"
Jan 28 12:37:08 crc kubenswrapper[4804]: E0128 12:37:08.530923 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.530930 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server"
Jan 28 12:37:08 crc kubenswrapper[4804]: E0128 12:37:08.530952 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-utilities"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.530962 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="extract-utilities"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.531125 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1fb8773-4961-41e7-9111-b828c5e51c99" containerName="registry-server"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.532372 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.546234 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"]
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.599705 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.599749 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.599797 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.700982 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701031 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701099 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701652 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.701709 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.721716 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"redhat-operators-hmsxq\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") " pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:08 crc kubenswrapper[4804]: I0128 12:37:08.848811 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.285631 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"]
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.532097 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-46g75"]
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.534160 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.548250 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g75"]
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.714216 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlp2\" (UniqueName: \"kubernetes.io/projected/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-kube-api-access-lvlp2\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.714271 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-utilities\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.714404 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-catalog-content\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.808088 4804 generic.go:334] "Generic (PLEG): container finished" podID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerID="60fe76a65de41cce8c367c6ffab4aa6f356b514a9d3158b59aab700a311236f8" exitCode=0
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.808132 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"60fe76a65de41cce8c367c6ffab4aa6f356b514a9d3158b59aab700a311236f8"}
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.808163 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerStarted","Data":"dcdf674db3a717933ef61b4b228718afbf102de6aef7a1d4dcfe349fcd4ff1b6"}
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.809788 4804 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.815823 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlp2\" (UniqueName: \"kubernetes.io/projected/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-kube-api-access-lvlp2\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.815935 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-utilities\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.816324 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-catalog-content\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.816718 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-utilities\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.817003 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-catalog-content\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.837605 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlp2\" (UniqueName: \"kubernetes.io/projected/f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9-kube-api-access-lvlp2\") pod \"certified-operators-46g75\" (UID: \"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9\") " pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:09 crc kubenswrapper[4804]: I0128 12:37:09.887693 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.193499 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g75"]
Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.818181 4804 generic.go:334] "Generic (PLEG): container finished" podID="f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9" containerID="c58acf18235fd7df1170d9752645f7eb810deb7e0134c3cfa58e4d677a06e01a" exitCode=0
Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.818300 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerDied","Data":"c58acf18235fd7df1170d9752645f7eb810deb7e0134c3cfa58e4d677a06e01a"}
Jan 28 12:37:10 crc kubenswrapper[4804]: I0128 12:37:10.818562 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerStarted","Data":"f8bddb7e636123185da6486c3969316eb394e6023d208820f9123666e8a30726"}
Jan 28 12:37:11 crc kubenswrapper[4804]: I0128 12:37:11.828357 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerStarted","Data":"e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430"}
Jan 28 12:37:11 crc kubenswrapper[4804]: I0128 12:37:11.915583 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:37:11 crc kubenswrapper[4804]: E0128 12:37:11.915819 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:37:12 crc kubenswrapper[4804]: I0128 12:37:12.838120 4804 generic.go:334] "Generic (PLEG): container finished" podID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerID="e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430" exitCode=0
Jan 28 12:37:12 crc kubenswrapper[4804]: I0128 12:37:12.838176 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430"}
Jan 28 12:37:15 crc kubenswrapper[4804]: I0128 12:37:15.861865 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerStarted","Data":"844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7"}
Jan 28 12:37:15 crc kubenswrapper[4804]: I0128 12:37:15.884449 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hmsxq" podStartSLOduration=2.578927496 podStartE2EDuration="7.884433695s" podCreationTimestamp="2026-01-28 12:37:08 +0000 UTC" firstStartedPulling="2026-01-28 12:37:09.809555899 +0000 UTC m=+4505.604435883" lastFinishedPulling="2026-01-28 12:37:15.115062098 +0000 UTC m=+4510.909942082" observedRunningTime="2026-01-28 12:37:15.884204558 +0000 UTC m=+4511.679084542" watchObservedRunningTime="2026-01-28 12:37:15.884433695 +0000 UTC m=+4511.679313679"
Jan 28 12:37:16 crc kubenswrapper[4804]: I0128 12:37:16.868919 4804 generic.go:334] "Generic (PLEG): container finished" podID="f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9" containerID="90c5f0bbcbefe5088f0189f9caee3989fd1a20cf07489f8da5a3c183a3d9185c" exitCode=0
Jan 28 12:37:16 crc kubenswrapper[4804]: I0128 12:37:16.869042 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerDied","Data":"90c5f0bbcbefe5088f0189f9caee3989fd1a20cf07489f8da5a3c183a3d9185c"}
Jan 28 12:37:17 crc kubenswrapper[4804]: I0128 12:37:17.879637 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-46g75" event={"ID":"f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9","Type":"ContainerStarted","Data":"e5d55ae93d4218549d90610e7302adc6e228eb1b32b5a4968b4abcf1095f9d0d"}
Jan 28 12:37:17 crc kubenswrapper[4804]: I0128 12:37:17.909520 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-46g75" podStartSLOduration=2.477683887 podStartE2EDuration="8.909502704s" podCreationTimestamp="2026-01-28 12:37:09 +0000 UTC" firstStartedPulling="2026-01-28 12:37:10.988910036 +0000 UTC m=+4506.783790040" lastFinishedPulling="2026-01-28 12:37:17.420728873 +0000 UTC m=+4513.215608857" observedRunningTime="2026-01-28 12:37:17.90585033 +0000 UTC m=+4513.700730334" watchObservedRunningTime="2026-01-28 12:37:17.909502704 +0000 UTC m=+4513.704382688"
Jan 28 12:37:18 crc kubenswrapper[4804]: I0128 12:37:18.850098 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:18 crc kubenswrapper[4804]: I0128 12:37:18.850154 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.203539 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"]
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.204937 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.211606 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4h9f"/"openshift-service-ca.crt"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.212050 4804 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c4h9f"/"default-dockercfg-mdkfw"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.212182 4804 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4h9f"/"kube-root-ca.crt"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.214251 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"]
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.253591 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.253681 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.354560 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.354636 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.355174 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.390365 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"must-gather-8j4f9\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") " pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.523408 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.887952 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.888357 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.891129 4804 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hmsxq" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" probeResult="failure" output=<
Jan 28 12:37:19 crc kubenswrapper[4804]: timeout: failed to connect service ":50051" within 1s
Jan 28 12:37:19 crc kubenswrapper[4804]: >
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.936316 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:19 crc kubenswrapper[4804]: I0128 12:37:19.940301 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"]
Jan 28 12:37:19 crc kubenswrapper[4804]: W0128 12:37:19.941814 4804 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0d220da7_e30a_4dde_9ae8_c10ada1875f8.slice/crio-33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9 WatchSource:0}: Error finding container 33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9: Status 404 returned error can't find the container with id 33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9
Jan 28 12:37:20 crc kubenswrapper[4804]: I0128 12:37:20.909409 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerStarted","Data":"33d34ee92faf50587b7e65e2f7e9616c9c64a31e488d12355c415a96188619c9"}
Jan 28 12:37:26 crc kubenswrapper[4804]: I0128 12:37:26.915405 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:37:26 crc kubenswrapper[4804]: E0128 12:37:26.916177 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c"
Jan 28 12:37:28 crc kubenswrapper[4804]: I0128 12:37:28.890703 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:28 crc kubenswrapper[4804]: I0128 12:37:28.939947 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:29 crc kubenswrapper[4804]: I0128 12:37:29.933064 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-46g75"
Jan 28 12:37:29 crc kubenswrapper[4804]: I0128 12:37:29.984517 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerStarted","Data":"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"}
Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.362128 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-46g75"]
Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.531157 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"]
Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.531411 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hmsxq" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" containerID="cri-o://844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7" gracePeriod=2
Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.720073 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"]
Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.720922 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8n6zc" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" containerID="cri-o://8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862" gracePeriod=2
Jan 28 12:37:30 crc kubenswrapper[4804]: I0128 12:37:30.993582 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerStarted","Data":"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"}
Jan 28 12:37:31 crc kubenswrapper[4804]: I0128 12:37:31.013237 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" podStartSLOduration=2.311081938 podStartE2EDuration="12.01320827s" podCreationTimestamp="2026-01-28 12:37:19 +0000 UTC" firstStartedPulling="2026-01-28 12:37:19.944227674 +0000 UTC m=+4515.739107658" lastFinishedPulling="2026-01-28 12:37:29.646354006 +0000 UTC m=+4525.441233990" observedRunningTime="2026-01-28 12:37:31.007717708 +0000 UTC m=+4526.802597682" watchObservedRunningTime="2026-01-28 12:37:31.01320827 +0000 UTC m=+4526.808088254"
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.004120 4804 generic.go:334] "Generic (PLEG): container finished" podID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerID="844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7" exitCode=0
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.004196 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7"}
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.007005 4804 generic.go:334] "Generic (PLEG): container finished" podID="477f5ec7-c491-494c-add6-a233798ffdfa" containerID="8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862" exitCode=0
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.007070 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862"}
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.543987 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.574835 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") pod \"477f5ec7-c491-494c-add6-a233798ffdfa\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") "
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.574991 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") pod \"477f5ec7-c491-494c-add6-a233798ffdfa\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") "
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.576071 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") pod \"477f5ec7-c491-494c-add6-a233798ffdfa\" (UID: \"477f5ec7-c491-494c-add6-a233798ffdfa\") "
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.576425 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities" (OuterVolumeSpecName: "utilities") pod "477f5ec7-c491-494c-add6-a233798ffdfa" (UID: "477f5ec7-c491-494c-add6-a233798ffdfa"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.591705 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns" (OuterVolumeSpecName: "kube-api-access-jz5ns") pod "477f5ec7-c491-494c-add6-a233798ffdfa" (UID: "477f5ec7-c491-494c-add6-a233798ffdfa"). InnerVolumeSpecName "kube-api-access-jz5ns". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.641282 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "477f5ec7-c491-494c-add6-a233798ffdfa" (UID: "477f5ec7-c491-494c-add6-a233798ffdfa"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.680947 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.681399 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/477f5ec7-c491-494c-add6-a233798ffdfa-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:37:32 crc kubenswrapper[4804]: I0128 12:37:32.681417 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz5ns\" (UniqueName: \"kubernetes.io/projected/477f5ec7-c491-494c-add6-a233798ffdfa-kube-api-access-jz5ns\") on node \"crc\" DevicePath \"\""
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.025109 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8n6zc" event={"ID":"477f5ec7-c491-494c-add6-a233798ffdfa","Type":"ContainerDied","Data":"5666d32cb1791e32eb7a0f138a32a6994ac7508322ed412acb7afd87a03dcb18"}
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.025168 4804 scope.go:117] "RemoveContainer" containerID="8097ea45070d38453a6edb261d8ee6d04408f9d4cf265b8d012cfbbcf0aab862"
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.025291 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8n6zc"
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.057128 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"]
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.067738 4804 scope.go:117] "RemoveContainer" containerID="5eeef8445a28c47bafd383bf532c0bbf3abc3e3acbe80741d1fb008b29abd5a7"
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.076111 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8n6zc"]
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.456507 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.465042 4804 scope.go:117] "RemoveContainer" containerID="97869d81e8512d2767849c948a0eaf69907f795ddaf291cb6977a857a679da98"
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.597077 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") pod \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") "
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.597252 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") pod \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") "
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.597355 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") pod \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\" (UID: \"d6565976-3a91-4cc5-9fb6-e564382fdf6e\") "
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.598323 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities" (OuterVolumeSpecName: "utilities") pod "d6565976-3a91-4cc5-9fb6-e564382fdf6e" (UID: "d6565976-3a91-4cc5-9fb6-e564382fdf6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.606137 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9" (OuterVolumeSpecName: "kube-api-access-xh9r9") pod "d6565976-3a91-4cc5-9fb6-e564382fdf6e" (UID: "d6565976-3a91-4cc5-9fb6-e564382fdf6e"). InnerVolumeSpecName "kube-api-access-xh9r9". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.699230 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.699595 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xh9r9\" (UniqueName: \"kubernetes.io/projected/d6565976-3a91-4cc5-9fb6-e564382fdf6e-kube-api-access-xh9r9\") on node \"crc\" DevicePath \"\""
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.740483 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d6565976-3a91-4cc5-9fb6-e564382fdf6e" (UID: "d6565976-3a91-4cc5-9fb6-e564382fdf6e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:37:33 crc kubenswrapper[4804]: I0128 12:37:33.801399 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d6565976-3a91-4cc5-9fb6-e564382fdf6e-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.035315 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmsxq" event={"ID":"d6565976-3a91-4cc5-9fb6-e564382fdf6e","Type":"ContainerDied","Data":"dcdf674db3a717933ef61b4b228718afbf102de6aef7a1d4dcfe349fcd4ff1b6"}
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.035377 4804 scope.go:117] "RemoveContainer" containerID="844c3c2fc4360340d06d78304eb6b4ae9316e93e1e22c6fdfe05d77608bd17e7"
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.035329 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmsxq"
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.064237 4804 scope.go:117] "RemoveContainer" containerID="e647a87f942c62635415ff89de2b9477ff0f1f887329894c48333d06fed69430"
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.070950 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"]
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.079106 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hmsxq"]
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.091103 4804 scope.go:117] "RemoveContainer" containerID="60fe76a65de41cce8c367c6ffab4aa6f356b514a9d3158b59aab700a311236f8"
Jan 28 12:37:34 crc kubenswrapper[4804]: I0128 12:37:34.925906 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" path="/var/lib/kubelet/pods/477f5ec7-c491-494c-add6-a233798ffdfa/volumes"
Jan 28 12:37:34 crc
kubenswrapper[4804]: I0128 12:37:34.926837 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" path="/var/lib/kubelet/pods/d6565976-3a91-4cc5-9fb6-e564382fdf6e/volumes" Jan 28 12:37:38 crc kubenswrapper[4804]: I0128 12:37:38.915959 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:37:38 crc kubenswrapper[4804]: E0128 12:37:38.916691 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:37:52 crc kubenswrapper[4804]: I0128 12:37:52.916584 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:37:52 crc kubenswrapper[4804]: E0128 12:37:52.918526 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:06 crc kubenswrapper[4804]: I0128 12:38:06.915160 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:06 crc kubenswrapper[4804]: E0128 12:38:06.916063 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:21 crc kubenswrapper[4804]: I0128 12:38:21.915711 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:21 crc kubenswrapper[4804]: E0128 12:38:21.916476 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:32 crc kubenswrapper[4804]: I0128 12:38:32.957805 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/util/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.149981 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/util/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.167078 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/pull/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.167145 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/pull/0.log" Jan 28 12:38:33 crc 
kubenswrapper[4804]: I0128 12:38:33.339727 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/extract/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.354588 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/pull/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.374970 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_805fc9ef0d2b548e6e9f89423c5ae12a14f172a831ae52a5fa8a16a4e2sxm8s_490a3033-f3bb-4a92-a03e-03ada6af8280/util/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.572822 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-7b6c4d8c5f-vjb6d_c36b33fc-3ff6-4c44-a079-bc48a5a3d509/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.645006 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-8d874c8fc-j5j86_db8796b2-e360-4287-9ba2-4ceda6de770e/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.695378 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-6d9697b7f4-fbggh_b14a4da9-54a6-4a7c-bd0d-3cf9cd05d048/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.886818 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-8886f4c47-qz2dl_186e63a0-88e6-404b-963c-e5cb22485277/manager/0.log" Jan 28 12:38:33 crc kubenswrapper[4804]: I0128 12:38:33.899699 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_heat-operator-controller-manager-69d6db494d-hxv8b_acdcc5e8-c284-444e-86c2-96aec766b35b/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.183360 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-5fb775575f-fw9dq_ba3d9f70-1d55-4ca1-a36f-19047f0a9a6d/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.464235 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-79955696d6-wb5k2_f75f08ff-7d3c-4fb4-a366-1c996771a71d/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.520336 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5f4b8bd54d-k6rzx_e770ba97-59e1-4752-8e93-bc7d53ff7c04/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.630947 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-7dd968899f-wl5w5_ec1046a1-b834-40e4-b82a-923885428171/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.734763 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-84f48565d4-s92b7_d5ce0c1e-3061-46ed-a816-3839144b160a/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.859676 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-67bf948998-7dg9l_07990c6c-3350-45a8-85de-1e0db97acb07/manager/0.log" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.919827 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:34 crc kubenswrapper[4804]: E0128 12:38:34.920059 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:34 crc kubenswrapper[4804]: I0128 12:38:34.980145 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-585dbc889-n9kpn_b79b961c-583d-4e78-8513-c44ed292c325/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.115097 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-6687f8d877-m5xng_8f1a2428-c6c8-4113-9654-0c58ab91b45b/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.182739 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-55bff696bd-dndv5_8c7ff5ff-8c23-46f4-9ba6-dda63fa9cce1/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.262948 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-59c4b45c4dtnlcg_a26075bd-4d23-463a-abe8-575a02ebc9ad/manager/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.481766 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-init-cdb5b4f99-hxlm9_134135c7-1032-47aa-b0bd-361463826caf/operator/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.666858 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-cmjpc_d2e56e8b-cbb7-4f17-88df-dbe1f92e9cec/registry-server/0.log" Jan 28 12:38:35 crc kubenswrapper[4804]: I0128 12:38:35.938688 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-788c46999f-4cpk5_7ab2436a-1b54-4c5e-bdc1-959026660c98/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.045459 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-5b964cf4cd-bfl45_deece2f8-8c1c-4599-80f4-44e6ec055a18/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.205055 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-668c99d594-cqlch_69938639-9ff0-433c-bd73-8d129935e7d4/operator/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.302719 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-6548796f98-5pssc_58f748c2-ceb6-4d34-8a2e-8227e59ef560/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.400686 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-68fc8c869-fwd68_eb1c01a9-6548-49cd-8e1f-4f01daaff754/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.512984 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-64b5b76f97-2hdgj_23a10136-5079-4838-adf9-6512ccfd5f2c/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.587969 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-56f8bfcd9f-9vgvb_ff35634f-2b61-44e4-934a-74b39c5b7335/manager/0.log" Jan 28 12:38:36 crc kubenswrapper[4804]: I0128 12:38:36.676561 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-564965969-659wf_67fbb1e9-d718-4075-971a-33a245c498e3/manager/0.log" Jan 28 12:38:49 crc kubenswrapper[4804]: I0128 12:38:49.915274 4804 scope.go:117] 
"RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:38:49 crc kubenswrapper[4804]: E0128 12:38:49.916175 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:38:54 crc kubenswrapper[4804]: I0128 12:38:54.059635 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-f822b_c03ebf08-d5a0-48b4-a1ca-3eec30c14490/control-plane-machine-set-operator/0.log" Jan 28 12:38:54 crc kubenswrapper[4804]: I0128 12:38:54.235787 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-m5p7p_e2b8b707-60c9-4138-a4d8-d218162737fe/kube-rbac-proxy/0.log" Jan 28 12:38:54 crc kubenswrapper[4804]: I0128 12:38:54.276564 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-m5p7p_e2b8b707-60c9-4138-a4d8-d218162737fe/machine-api-operator/0.log" Jan 28 12:39:03 crc kubenswrapper[4804]: I0128 12:39:03.915262 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:39:03 crc kubenswrapper[4804]: E0128 12:39:03.916022 4804 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-slkk8_openshift-machine-config-operator(d901be89-84b0-4249-9548-2e626a112a4c)\"" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" 
podUID="d901be89-84b0-4249-9548-2e626a112a4c" Jan 28 12:39:05 crc kubenswrapper[4804]: I0128 12:39:05.385044 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-hkwds_4da2c74c-883d-4690-bb94-a34b198ccf89/cert-manager-controller/0.log" Jan 28 12:39:05 crc kubenswrapper[4804]: I0128 12:39:05.539328 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-pgj92_47a0c933-7194-403d-8345-446cc9941fa5/cert-manager-cainjector/0.log" Jan 28 12:39:05 crc kubenswrapper[4804]: I0128 12:39:05.557942 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-cjsz8_dd7c8a18-36d1-45d5-aaf5-daff9b218438/cert-manager-webhook/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.737167 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-7754f76f8b-bbn52_77313f93-489e-4da6-81bb-eec0c795e242/nmstate-console-plugin/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.876169 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-r6vm7_a741d157-784a-4e3e-9e35-200d91f3aa47/nmstate-handler/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.914442 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2pq8_b63500d6-29e0-4eef-82cd-fdc0036ef0f2/kube-rbac-proxy/0.log" Jan 28 12:39:16 crc kubenswrapper[4804]: I0128 12:39:16.974921 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-54757c584b-b2pq8_b63500d6-29e0-4eef-82cd-fdc0036ef0f2/nmstate-metrics/0.log" Jan 28 12:39:17 crc kubenswrapper[4804]: I0128 12:39:17.076162 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-646758c888-hzhkh_d478ae3c-a9f5-4f6e-ae30-1bd80027de73/nmstate-operator/0.log" Jan 28 12:39:17 crc 
kubenswrapper[4804]: I0128 12:39:17.153917 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-8474b5b9d8-c5t8z_c17b2105-0264-4cf3-8204-e68ba577728e/nmstate-webhook/0.log" Jan 28 12:39:18 crc kubenswrapper[4804]: I0128 12:39:18.915927 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355" Jan 28 12:39:19 crc kubenswrapper[4804]: I0128 12:39:19.877434 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4"} Jan 28 12:39:41 crc kubenswrapper[4804]: I0128 12:39:41.965177 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rfhfx_1ae74e9e-799f-46bb-9a53-c8307c83203d/kube-rbac-proxy/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.222145 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.309972 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-rfhfx_1ae74e9e-799f-46bb-9a53-c8307c83203d/controller/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.404217 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.436929 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.437637 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.477975 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.643668 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.665559 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.668307 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.686693 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.820576 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-frr-files/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.838258 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-reloader/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.865983 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/cp-metrics/0.log" Jan 28 12:39:42 crc kubenswrapper[4804]: I0128 12:39:42.870028 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/controller/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.012626 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/frr-metrics/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.073502 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/kube-rbac-proxy/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.098481 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/kube-rbac-proxy-frr/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.189641 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190034 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190055 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190089 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190097 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190117 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190126 4804 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-utilities" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190139 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190145 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190158 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190163 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: E0128 12:39:43.190174 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190180 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="extract-content" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190342 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="477f5ec7-c491-494c-add6-a233798ffdfa" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.190358 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d6565976-3a91-4cc5-9fb6-e564382fdf6e" containerName="registry-server" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.191541 4804 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.203979 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.246249 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/reloader/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.338590 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.338680 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.338896 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.410709 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-cvlt6_3ce00c89-f00d-43aa-9907-77bf331c3dbd/frr-k8s-webhook-server/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: 
I0128 12:39:43.439658 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.439732 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.439818 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.440179 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.440227 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.458619 4804 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"community-operators-t6tht\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.513352 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.745486 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b85b59588-rf4wr_a0eda12d-b723-4a3a-8f2b-916de07b279c/manager/0.log" Jan 28 12:39:43 crc kubenswrapper[4804]: I0128 12:39:43.926179 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6b844cd4fc-mn427_13606290-8fc4-4792-a328-207ee9a1994e/webhook-server/0.log" Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.067118 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.181726 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-5kdlz_45631116-4b02-448f-9158-18eaae682d9d/frr/0.log" Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.220904 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kcvj8_2fa1df7e-03c8-4931-ad89-222acae36030/kube-rbac-proxy/0.log" Jan 28 12:39:44 crc kubenswrapper[4804]: I0128 12:39:44.617857 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-kcvj8_2fa1df7e-03c8-4931-ad89-222acae36030/speaker/0.log" Jan 28 12:39:45 crc kubenswrapper[4804]: I0128 12:39:45.034272 4804 generic.go:334] "Generic (PLEG): container finished" podID="d83e9c26-344d-455c-bb51-d378c8016381" 
containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" exitCode=0 Jan 28 12:39:45 crc kubenswrapper[4804]: I0128 12:39:45.034321 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17"} Jan 28 12:39:45 crc kubenswrapper[4804]: I0128 12:39:45.034349 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerStarted","Data":"78338d6312df0542bef470ab0fb2807b259b1b7b922c83ea5be1638d62c31969"} Jan 28 12:39:46 crc kubenswrapper[4804]: I0128 12:39:46.042290 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerStarted","Data":"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0"} Jan 28 12:39:47 crc kubenswrapper[4804]: I0128 12:39:47.050037 4804 generic.go:334] "Generic (PLEG): container finished" podID="d83e9c26-344d-455c-bb51-d378c8016381" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" exitCode=0 Jan 28 12:39:47 crc kubenswrapper[4804]: I0128 12:39:47.050089 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0"} Jan 28 12:39:48 crc kubenswrapper[4804]: I0128 12:39:48.059840 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerStarted","Data":"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd"} Jan 28 12:39:48 crc 
kubenswrapper[4804]: I0128 12:39:48.083484 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-t6tht" podStartSLOduration=2.539196473 podStartE2EDuration="5.083466403s" podCreationTimestamp="2026-01-28 12:39:43 +0000 UTC" firstStartedPulling="2026-01-28 12:39:45.036207909 +0000 UTC m=+4660.831087893" lastFinishedPulling="2026-01-28 12:39:47.580477839 +0000 UTC m=+4663.375357823" observedRunningTime="2026-01-28 12:39:48.078691455 +0000 UTC m=+4663.873571449" watchObservedRunningTime="2026-01-28 12:39:48.083466403 +0000 UTC m=+4663.878346387" Jan 28 12:39:53 crc kubenswrapper[4804]: I0128 12:39:53.514342 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:53 crc kubenswrapper[4804]: I0128 12:39:53.514866 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:53 crc kubenswrapper[4804]: I0128 12:39:53.562913 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:54 crc kubenswrapper[4804]: I0128 12:39:54.138020 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:54 crc kubenswrapper[4804]: I0128 12:39:54.177895 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.114159 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-t6tht" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" containerID="cri-o://57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" gracePeriod=2 Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.581867 4804 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.621603 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") pod \"d83e9c26-344d-455c-bb51-d378c8016381\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.621667 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") pod \"d83e9c26-344d-455c-bb51-d378c8016381\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.621695 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") pod \"d83e9c26-344d-455c-bb51-d378c8016381\" (UID: \"d83e9c26-344d-455c-bb51-d378c8016381\") " Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.622746 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities" (OuterVolumeSpecName: "utilities") pod "d83e9c26-344d-455c-bb51-d378c8016381" (UID: "d83e9c26-344d-455c-bb51-d378c8016381"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.629969 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4" (OuterVolumeSpecName: "kube-api-access-djgj4") pod "d83e9c26-344d-455c-bb51-d378c8016381" (UID: "d83e9c26-344d-455c-bb51-d378c8016381"). 
InnerVolumeSpecName "kube-api-access-djgj4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.651196 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/util/0.log" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.700658 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d83e9c26-344d-455c-bb51-d378c8016381" (UID: "d83e9c26-344d-455c-bb51-d378c8016381"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.723218 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.723259 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djgj4\" (UniqueName: \"kubernetes.io/projected/d83e9c26-344d-455c-bb51-d378c8016381-kube-api-access-djgj4\") on node \"crc\" DevicePath \"\"" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.723275 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d83e9c26-344d-455c-bb51-d378c8016381-utilities\") on node \"crc\" DevicePath \"\"" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.807942 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/util/0.log" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.839210 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/pull/0.log" Jan 28 12:39:56 crc kubenswrapper[4804]: I0128 12:39:56.922601 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.048042 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.081384 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.087019 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a99ct6_237e3a43-08f5-4b3c-864f-d5f90276bac3/extract/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121858 4804 generic.go:334] "Generic (PLEG): container finished" podID="d83e9c26-344d-455c-bb51-d378c8016381" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" exitCode=0 Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121928 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd"} Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121961 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-t6tht" 
event={"ID":"d83e9c26-344d-455c-bb51-d378c8016381","Type":"ContainerDied","Data":"78338d6312df0542bef470ab0fb2807b259b1b7b922c83ea5be1638d62c31969"} Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121978 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-t6tht" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.121985 4804 scope.go:117] "RemoveContainer" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.140792 4804 scope.go:117] "RemoveContainer" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.144837 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.153457 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-t6tht"] Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.170592 4804 scope.go:117] "RemoveContainer" containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.191348 4804 scope.go:117] "RemoveContainer" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" Jan 28 12:39:57 crc kubenswrapper[4804]: E0128 12:39:57.191808 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd\": container with ID starting with 57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd not found: ID does not exist" containerID="57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.191861 4804 pod_container_deletor.go:53] "DeleteContainer returned 
error" containerID={"Type":"cri-o","ID":"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd"} err="failed to get container status \"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd\": rpc error: code = NotFound desc = could not find container \"57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd\": container with ID starting with 57c67194b17d5c50b6c26d8004eb81cc259fff6ef7b2719349500a28605a8bbd not found: ID does not exist" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.191904 4804 scope.go:117] "RemoveContainer" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" Jan 28 12:39:57 crc kubenswrapper[4804]: E0128 12:39:57.192378 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0\": container with ID starting with 1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0 not found: ID does not exist" containerID="1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.192414 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0"} err="failed to get container status \"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0\": rpc error: code = NotFound desc = could not find container \"1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0\": container with ID starting with 1e1cc082850a75b9ce180bdbae7ba9c33899b8567c3154946ccc2de276c6f1c0 not found: ID does not exist" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.192445 4804 scope.go:117] "RemoveContainer" containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" Jan 28 12:39:57 crc kubenswrapper[4804]: E0128 12:39:57.195024 4804 log.go:32] "ContainerStatus from runtime 
service failed" err="rpc error: code = NotFound desc = could not find container \"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17\": container with ID starting with 2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17 not found: ID does not exist" containerID="2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.195066 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17"} err="failed to get container status \"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17\": rpc error: code = NotFound desc = could not find container \"2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17\": container with ID starting with 2fb40e1a7f96a2260135cc99304b2d4660c3187606d1109f049bf90290cdaa17 not found: ID does not exist" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.246229 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.380169 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.404129 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.404615 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/pull/0.log" 
Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.562346 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/util/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.566588 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/extract/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.588773 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dcf8xrh_1c8da098-aace-4ed5-8846-6fff6aee19be/pull/0.log" Jan 28 12:39:57 crc kubenswrapper[4804]: I0128 12:39:57.984382 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/util/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.138342 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/pull/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.155067 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/pull/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.175981 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/util/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.323606 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/util/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.353310 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/extract/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.353562 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_53efe8611d43ac2275911d954e05efbbba7920a530aff9253ed1cec71397rzc_1622f571-d0d6-4247-b47e-4dda08dea3b3/pull/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.492293 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-utilities/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.663709 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-utilities/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.671349 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-content/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.685493 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-content/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.819807 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-content/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.838377 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/extract-utilities/0.log" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.923215 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d83e9c26-344d-455c-bb51-d378c8016381" path="/var/lib/kubelet/pods/d83e9c26-344d-455c-bb51-d378c8016381/volumes" Jan 28 12:39:58 crc kubenswrapper[4804]: I0128 12:39:58.944702 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-46g75_f9ff21c3-9c0c-4c4f-8d8e-4773296a12a9/registry-server/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.354901 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-utilities/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.460762 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-utilities/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.494425 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-content/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.516733 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-content/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.682687 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-utilities/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.718577 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/extract-content/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.876763 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-s76k6_349fc9e3-a236-44fd-b7b9-ee08f25c58fd/marketplace-operator/0.log" Jan 28 12:39:59 crc kubenswrapper[4804]: I0128 12:39:59.978896 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.107363 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.151049 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.163647 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.359606 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-wbxgh_91e77bd7-6a7b-4b91-b47d-61e61d157acb/registry-server/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.360755 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.373305 4804 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.518591 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-mfzfl_7e326a9c-bf0f-4d43-87f0-f4c4e2667118/registry-server/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.550242 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.662262 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.715186 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.715793 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-content/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.846883 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-utilities/0.log" Jan 28 12:40:00 crc kubenswrapper[4804]: I0128 12:40:00.869836 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/extract-content/0.log" Jan 28 12:40:01 crc kubenswrapper[4804]: I0128 12:40:01.504032 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hfp4x_64d5e8a4-00e0-4aae-988b-d10e5f36cae7/registry-server/0.log" Jan 28 
12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.685347 4804 generic.go:334] "Generic (PLEG): container finished" podID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f" exitCode=0 Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.685448 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" event={"ID":"0d220da7-e30a-4dde-9ae8-c10ada1875f8","Type":"ContainerDied","Data":"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"} Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.686648 4804 scope.go:117] "RemoveContainer" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842529 4804 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"] Jan 28 12:41:16 crc kubenswrapper[4804]: E0128 12:41:16.842869 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842906 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" Jan 28 12:41:16 crc kubenswrapper[4804]: E0128 12:41:16.842938 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-content" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842947 4804 state_mem.go:107] "Deleted CPUSet assignment" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-content" Jan 28 12:41:16 crc kubenswrapper[4804]: E0128 12:41:16.842964 4804 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-utilities" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.842971 4804 
state_mem.go:107] "Deleted CPUSet assignment" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="extract-utilities" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.843151 4804 memory_manager.go:354] "RemoveStaleState removing state" podUID="d83e9c26-344d-455c-bb51-d378c8016381" containerName="registry-server" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.844125 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.864248 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"] Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.915944 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4h9f_must-gather-8j4f9_0d220da7-e30a-4dde-9ae8-c10ada1875f8/gather/0.log" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.942623 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.942818 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck" Jan 28 12:41:16 crc kubenswrapper[4804]: I0128 12:41:16.942868 4804 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.044273 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.044324 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.044409 4804 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.045096 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.045261 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.067065 4804 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"redhat-marketplace-nlwck\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") " pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.167097 4804 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.628187 4804 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"]
Jan 28 12:41:17 crc kubenswrapper[4804]: I0128 12:41:17.695552 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerStarted","Data":"922433d8b0899e9096a4bfb7dca7688b52a594e4b98b56b30120e33078c90694"}
Jan 28 12:41:18 crc kubenswrapper[4804]: I0128 12:41:18.741565 4804 generic.go:334] "Generic (PLEG): container finished" podID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a" exitCode=0
Jan 28 12:41:18 crc kubenswrapper[4804]: I0128 12:41:18.741653 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"}
Jan 28 12:41:19 crc kubenswrapper[4804]: I0128 12:41:19.750343 4804 generic.go:334] "Generic (PLEG): container finished" podID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79" exitCode=0
Jan 28 12:41:19 crc kubenswrapper[4804]: I0128 12:41:19.750453 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"}
Jan 28 12:41:20 crc kubenswrapper[4804]: I0128 12:41:20.758652 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerStarted","Data":"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"}
Jan 28 12:41:20 crc kubenswrapper[4804]: I0128 12:41:20.777427 4804 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nlwck" podStartSLOduration=3.377320085 podStartE2EDuration="4.777408158s" podCreationTimestamp="2026-01-28 12:41:16 +0000 UTC" firstStartedPulling="2026-01-28 12:41:18.743659138 +0000 UTC m=+4754.538539122" lastFinishedPulling="2026-01-28 12:41:20.143747211 +0000 UTC m=+4755.938627195" observedRunningTime="2026-01-28 12:41:20.773794805 +0000 UTC m=+4756.568674789" watchObservedRunningTime="2026-01-28 12:41:20.777408158 +0000 UTC m=+4756.572288142"
Jan 28 12:41:23 crc kubenswrapper[4804]: I0128 12:41:23.873649 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"]
Jan 28 12:41:23 crc kubenswrapper[4804]: I0128 12:41:23.874339 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-c4h9f/must-gather-8j4f9" podUID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" containerName="copy" containerID="cri-o://d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480" gracePeriod=2
Jan 28 12:41:23 crc kubenswrapper[4804]: I0128 12:41:23.879992 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4h9f/must-gather-8j4f9"]
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.231307 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4h9f_must-gather-8j4f9_0d220da7-e30a-4dde-9ae8-c10ada1875f8/copy/0.log"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.232550 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.350706 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") pod \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") "
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.351005 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") pod \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\" (UID: \"0d220da7-e30a-4dde-9ae8-c10ada1875f8\") "
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.356397 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr" (OuterVolumeSpecName: "kube-api-access-ctfdr") pod "0d220da7-e30a-4dde-9ae8-c10ada1875f8" (UID: "0d220da7-e30a-4dde-9ae8-c10ada1875f8"). InnerVolumeSpecName "kube-api-access-ctfdr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.441348 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "0d220da7-e30a-4dde-9ae8-c10ada1875f8" (UID: "0d220da7-e30a-4dde-9ae8-c10ada1875f8"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.452898 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ctfdr\" (UniqueName: \"kubernetes.io/projected/0d220da7-e30a-4dde-9ae8-c10ada1875f8-kube-api-access-ctfdr\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.452946 4804 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/0d220da7-e30a-4dde-9ae8-c10ada1875f8-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.788357 4804 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4h9f_must-gather-8j4f9_0d220da7-e30a-4dde-9ae8-c10ada1875f8/copy/0.log"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.789063 4804 generic.go:334] "Generic (PLEG): container finished" podID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480" exitCode=143
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.789130 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4h9f/must-gather-8j4f9"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.789141 4804 scope.go:117] "RemoveContainer" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.806664 4804 scope.go:117] "RemoveContainer" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.868470 4804 scope.go:117] "RemoveContainer" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"
Jan 28 12:41:24 crc kubenswrapper[4804]: E0128 12:41:24.868834 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480\": container with ID starting with d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480 not found: ID does not exist" containerID="d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.868865 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480"} err="failed to get container status \"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480\": rpc error: code = NotFound desc = could not find container \"d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480\": container with ID starting with d3c5183fe314f89fc3dd109b47c80aaf5118577872a603b04fad7ce8f2a48480 not found: ID does not exist"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.868912 4804 scope.go:117] "RemoveContainer" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"
Jan 28 12:41:24 crc kubenswrapper[4804]: E0128 12:41:24.869202 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f\": container with ID starting with c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f not found: ID does not exist" containerID="c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.869235 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f"} err="failed to get container status \"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f\": rpc error: code = NotFound desc = could not find container \"c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f\": container with ID starting with c49d4ca18e20288058920d9cf6ed340e1b80269dcf96b551a00e7a75a9065d3f not found: ID does not exist"
Jan 28 12:41:24 crc kubenswrapper[4804]: I0128 12:41:24.925547 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0d220da7-e30a-4dde-9ae8-c10ada1875f8" path="/var/lib/kubelet/pods/0d220da7-e30a-4dde-9ae8-c10ada1875f8/volumes"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.167944 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.168501 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.210485 4804 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.851613 4804 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:27 crc kubenswrapper[4804]: I0128 12:41:27.901933 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"]
Jan 28 12:41:29 crc kubenswrapper[4804]: I0128 12:41:29.823133 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nlwck" podUID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerName="registry-server" containerID="cri-o://c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c" gracePeriod=2
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.277610 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.348571 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") pod \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") "
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.348677 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") pod \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") "
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.348946 4804 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") pod \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\" (UID: \"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49\") "
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.350858 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities" (OuterVolumeSpecName: "utilities") pod "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" (UID: "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.355412 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l" (OuterVolumeSpecName: "kube-api-access-ch26l") pod "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" (UID: "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49"). InnerVolumeSpecName "kube-api-access-ch26l". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.371965 4804 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" (UID: "008dc2bf-7f07-41f9-88c3-b32ee3ec2b49"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.451252 4804 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-utilities\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.451542 4804 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.451632 4804 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ch26l\" (UniqueName: \"kubernetes.io/projected/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49-kube-api-access-ch26l\") on node \"crc\" DevicePath \"\""
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831244 4804 generic.go:334] "Generic (PLEG): container finished" podID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c" exitCode=0
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831291 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"}
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831334 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nlwck" event={"ID":"008dc2bf-7f07-41f9-88c3-b32ee3ec2b49","Type":"ContainerDied","Data":"922433d8b0899e9096a4bfb7dca7688b52a594e4b98b56b30120e33078c90694"}
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831336 4804 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nlwck"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.831355 4804 scope.go:117] "RemoveContainer" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.850122 4804 scope.go:117] "RemoveContainer" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.866099 4804 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"]
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.872266 4804 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nlwck"]
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.893468 4804 scope.go:117] "RemoveContainer" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.908260 4804 scope.go:117] "RemoveContainer" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"
Jan 28 12:41:30 crc kubenswrapper[4804]: E0128 12:41:30.908694 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c\": container with ID starting with c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c not found: ID does not exist" containerID="c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.908735 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c"} err="failed to get container status \"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c\": rpc error: code = NotFound desc = could not find container \"c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c\": container with ID starting with c554e422593ecfd19ab3752b22a2e4014634f94b857653dfddc1e8c979473d0c not found: ID does not exist"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.908761 4804 scope.go:117] "RemoveContainer" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"
Jan 28 12:41:30 crc kubenswrapper[4804]: E0128 12:41:30.909189 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79\": container with ID starting with 06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79 not found: ID does not exist" containerID="06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.909231 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79"} err="failed to get container status \"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79\": rpc error: code = NotFound desc = could not find container \"06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79\": container with ID starting with 06247d04f5bceda21ba578b08979760305b1a0fbee4f82b45062dbe3707c5c79 not found: ID does not exist"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.909260 4804 scope.go:117] "RemoveContainer" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"
Jan 28 12:41:30 crc kubenswrapper[4804]: E0128 12:41:30.909803 4804 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a\": container with ID starting with bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a not found: ID does not exist" containerID="bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.909844 4804 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a"} err="failed to get container status \"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a\": rpc error: code = NotFound desc = could not find container \"bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a\": container with ID starting with bd5825853048582cac5e25e4b49fcb4cbfcfe1ee8c8d760cd4d2c55b9efbdd1a not found: ID does not exist"
Jan 28 12:41:30 crc kubenswrapper[4804]: I0128 12:41:30.923948 4804 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="008dc2bf-7f07-41f9-88c3-b32ee3ec2b49" path="/var/lib/kubelet/pods/008dc2bf-7f07-41f9-88c3-b32ee3ec2b49/volumes"
Jan 28 12:41:42 crc kubenswrapper[4804]: I0128 12:41:42.582424 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:41:42 crc kubenswrapper[4804]: I0128 12:41:42.583186 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:41:58 crc kubenswrapper[4804]: E0128 12:41:58.263158 4804 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/systemd-hostnamed.service\": RecentStats: unable to find data in memory cache]"
Jan 28 12:42:12 crc kubenswrapper[4804]: I0128 12:42:12.581712 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:42:12 crc kubenswrapper[4804]: I0128 12:42:12.582272 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.582633 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.583346 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.583412 4804 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-slkk8"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.584154 4804 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4"} pod="openshift-machine-config-operator/machine-config-daemon-slkk8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 28 12:42:42 crc kubenswrapper[4804]: I0128 12:42:42.584235 4804 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" containerID="cri-o://3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4" gracePeriod=600
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.324989 4804 generic.go:334] "Generic (PLEG): container finished" podID="d901be89-84b0-4249-9548-2e626a112a4c" containerID="3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4" exitCode=0
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.325081 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerDied","Data":"3d962111bbfc2c2fbb1c4945c75c4824c37b7d2f4b777899d9eff306e2dd21a4"}
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.325369 4804 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" event={"ID":"d901be89-84b0-4249-9548-2e626a112a4c","Type":"ContainerStarted","Data":"ec3bd661a19a2cd11869dadca6b31f34237816cc3d7caece0577c2a01a50e5db"}
Jan 28 12:42:43 crc kubenswrapper[4804]: I0128 12:42:43.325398 4804 scope.go:117] "RemoveContainer" containerID="aa7b691a0162413aa60c5d611acc75a1f0b5a5e46231d7b45b913d7cdfd99355"
Jan 28 12:44:42 crc kubenswrapper[4804]: I0128 12:44:42.582744 4804 patch_prober.go:28] interesting pod/machine-config-daemon-slkk8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 28 12:44:42 crc kubenswrapper[4804]: I0128 12:44:42.583282 4804 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-slkk8" podUID="d901be89-84b0-4249-9548-2e626a112a4c" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"